• Defending democracy from cyberwarfare

    Foreign meddling in democratic elections, the proliferation of fake news, and threats to national security through the “weaponization of social media” will be tackled by a new research center launched last week at Australia’s Flinders University.

  • Russian Twitter propaganda predicted 2016 U.S. election polls

    There is one irrefutable, unequivocal conclusion which both the U.S. intelligence community and the thorough investigation by Robert Mueller share: Russia unleashed an extensive campaign of fake news and disinformation on social media with the aim of distorting U.S. public opinion, sowing discord, and swinging the election in favor of the Republican candidate Donald Trump. But was the Kremlin successful in its effort to put Trump in the White House? Statistical analysis of the Kremlin’s social media trolls on Twitter in the run-up to the 2016 election suggests that the answer is “yes.”

  • U.S. House passes election security bill after Russian hacking

    The U.S. House of Representatives, mostly along partisan lines, has passed legislation designed to enhance election security following outrage over Russian cyberinterference in the 2016 presidential election. The Democratic-sponsored bill would mandate paper ballot voting and post-election audits, as well as replace outdated and vulnerable voting equipment. The House bill faces strong opposition in the Republican-controlled Senate.

  • Monitoring Russia’s and China’s disinformation campaigns in Latin America and the Caribbean

    Propaganda has taken on a different form. Social media and multiple sources of information have obviated the traditional heavy-handed tactics of misinformation. Today, governments and state media exploit multiple platforms to shade the truth or report untruths that play on pre-existing divisions and prejudices to advance their political and geo-strategic agendas. Global Americans monitors four state news sources that have quickly gained influence in the region—Russia Today and Sputnik from Russia, and Xinhua and People’s Daily from China—to understand how they portray events for readers in Latin America and the Caribbean. Global Americans says it will feature articles that clearly intend to advance a partial view, an agenda, or an out-and-out mistruth, labeling them either False or Misleading, explaining why its team has determined them so, and including, where relevant, a reference that disproves the article’s content.

  • Deepfakes: Forensic techniques to identify tampered videos

    Computer scientists have developed a method that achieves 96 percent accuracy in identifying deepfakes when evaluated on a large-scale deepfake dataset.
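    The article does not describe the detection method itself, only its headline accuracy. As a minimal sketch of how such an accuracy figure is computed on a labeled dataset (the detector below is a hypothetical stand-in that thresholds an "artifact score"; the real system is not described in the source):

    ```python
    # Minimal sketch: scoring a deepfake detector on a labeled evaluation set.
    # The detector here is a toy stand-in, not the method from the article.

    def accuracy(predictions, labels):
        """Fraction of predictions matching the ground-truth labels."""
        assert len(predictions) == len(labels)
        correct = sum(p == y for p, y in zip(predictions, labels))
        return correct / len(labels)

    def toy_detector(artifact_score, threshold=0.5):
        """Hypothetical detector: flag a video as fake if its score is high.

        Returns 1 for "deepfake", 0 for "real"."""
        return 1 if artifact_score >= threshold else 0

    # Hypothetical evaluation set: (artifact_score, true_label) pairs.
    dataset = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0), (0.6, 0)]
    preds = [toy_detector(score) for score, _ in dataset]
    labels = [label for _, label in dataset]
    print(f"accuracy: {accuracy(preds, labels):.0%}")  # 4 of 5 correct -> 80%
    ```

    Reported figures like "96 percent" are this same ratio computed over a much larger held-out set of real and synthetic videos.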

  • The confused U.S. messaging campaign on Huawei

    For the past several months, American policymakers have sought to convince allies, partners, and potential partners to ban the Chinese telecommunications company Huawei from supplying the entirety of, or components for, 5G communications networks around the world. This messaging campaign has centered primarily on concerns that Huawei could assist the Chinese government in spying on other countries, or even in shutting down or manipulating their 5G networks in a warlike scenario. Justin Sherman and Robert Morgus write in Lawfare that the United States’ international messaging on this issue—to allies, partners, and potential partners alike—blurs the line between economic and national security risks, and in the process threatens to undermine U.S. efforts to communicate those risks.

  • Top takes: Suspected Russian intelligence operation

    A Russian-based information operation used fake accounts, forged documents, and dozens of online platforms to spread stories that attacked Western interests and unity. Its size and complexity indicated that it was conducted by a persistent, sophisticated, and well-resourced actor, possibly an intelligence operation. Operators worked across platforms to spread lies and impersonate political figures, and the operation shows online platforms’ ongoing vulnerability to disinformation campaigns.

  • Identifying a fake picture online is harder than you might think

    Research has shown that manipulated images can distort viewers’ memory and even influence their decision-making. So the harm that can be done by fake images is real and significant. Our findings suggest that to reduce the potential harm of fake images, the most effective strategy is to offer more people experiences with online media and digital image editing – including by investing in education. Then they’ll know more about how to evaluate online images and be less likely to fall for a fake.

  • Germany warns Huawei to meet Germany’s security requirements

    Germany warned Huawei that it must meet Germany’s security requirements before it will be allowed to bid on building the country’s 5G infrastructure. Germany has so far resisted U.S. pressure to exclude Huawei from the project. The United States has long suspected Huawei of serving the interests of Chinese intelligence, and Washington has argued that Huawei technology could be used by China for spying.

  • The challenges of Deepfakes to national security

    Last Thursday, 13 June 2019, Clint Watts testified before the House Intelligence Committee about the growing dangers of Deepfakes – that is, false audio and video content. Deepfakes grow in sophistication each day, and their dissemination via social media platforms is far and wide. Watts said: “I’d estimate Russia, as an enduring purveyor of disinformation, is and will continue to pursue the acquisition of synthetic media capabilities and employ the outputs against its adversaries around the world. I suspect they’ll be joined and outpaced potentially by China.” He added: “These two countries along with other authoritarian adversaries and their proxies will likely use Deepfakes as part of disinformation campaigns seeking to 1) discredit domestic dissidents and foreign detractors, 2) incite fear and promote conflict inside Western-style democracies, and 3) distort the reality of American audiences and the audiences of America’s allies.”

  • Deepfake myths: Common misconceptions about synthetic media

    There is finally some momentum to “do something” about deepfakes, but crucial misconceptions about deepfakes and their effect on our society may complicate efforts to develop a strategic approach to mitigating their negative impacts.

  • European elections suggest US shouldn’t be complacent in 2020

    In many ways, the European Parliament elections in late May were calmer than expected. Cyber aggression and disinformation operations seem not to have been as dramatic as in 2016, when Russian hackers and disinformation campaigns targeted elections in the U.S., France, and elsewhere around the world. However, there is no reason for complacency. The dangers remain real. For one thing, the target societies might have internalized the cleavages and chaos from information operations or self-sabotaged with divisive political rhetoric. In reaction, Russia may have scaled back its efforts, seeing an opportunity to benefit from lying low.

  • EU probe finds “continued, sustained” online disinformation by “Russian sources”

    The European Union says that it has gathered evidence of “continued and sustained” disinformation activity by Russia aimed at influencing the results of May’s elections for the European Parliament. The European Commission report said “Russian sources” tried to suppress voter turnout and influence voters’ preferences.

  • Alphabet-owned Jigsaw bought a Russian troll campaign as an experiment

    For more than two years, the notion of social media disinformation campaigns has conjured up images of Russia’s Internet Research Agency, an entire company housed on multiple floors of a corporate building in St. Petersburg, concocting propaganda at the Kremlin’s bidding. But a targeted troll campaign today can come much cheaper—as little as $250, says Andrew Gully, a research manager at Alphabet subsidiary Jigsaw. He knows because that’s the price Jigsaw paid for one last year. Andy Greenberg writes in Wired that as part of research into state-sponsored disinformation that it undertook in the spring of 2018, Jigsaw set out to test just how easily and cheaply social media disinformation campaigns, or “influence operations,” could be bought in the shadier corners of the Russian-speaking web. In March 2018, after negotiating with several underground disinformation vendors, Jigsaw analysts went so far as to hire one to carry out an actual disinformation operation, assigning the paid troll service to attack a political activism website Jigsaw had itself created as a target.

  • A top voting-machine firm is finally taking security seriously

    Over the past 18 months, election-security advocates have been pushing for new legislation shoring up the nation’s election infrastructure. Election-security reform proposals enjoy significant support among Democrats—who control the House of Representatives—and have picked up some Republican cosponsors, too. Timothy B. Lee writes in Wired that such measures, however, have faced hostility from the White House and from the Republican leadership of the Senate. Legislation called the Secure Elections Act, cosponsored by senators James Lankford (R-Oklahoma) and Amy Klobuchar (D-Minnesota) last year, aimed to shore up the nation’s election security by providing states with new money to phase out paperless systems. But the Lankford-Klobuchar bill stalled in the face of opposition from the Trump administration and Senate Republicans. At this point, any election reform legislation looks unlikely to pass before the 2020 election.