• U.S. Cyber Command cut Russian troll factory’s access to the internet

    The U.S. Cyber Command blocked the internet access of the St. Petersburg-based Internet Research Agency (IRA), a Russian disinformation and propaganda outfit contracted by the Kremlin to orchestrate the social media disinformation campaign aimed at helping Donald Trump win the 2016 presidential election. The IRA’s access to the internet was blocked on the day of the 2018 midterm elections and for a few days after the vote.

  • Telegram used by ISIS to spread propaganda globally

    The Counter Extremism Project (CEP) this week reported on a Telegram channel that called for lone-actor terrorist attacks in London, alongside other websites that host ISIS videos and propaganda. The encrypted messaging app is the platform of choice for the terrorist group when calling for violence.

  • U.S. hate groups hit record number last year amid increased violence

    American hate groups had a bumper year in 2018 as a surge in black and white nationalist groups lifted their number to a new record high, the Southern Poverty Law Center said in a report issued Wednesday. The increase was driven by growth in both black and white nationalist groups, the SPLC said. The number of white nationalist groups jumped from 100 to 148, while the number of black nationalist groups — typically anti-Semitic, anti-LGBTQ and anti-white — rose from 233 to 264. Some conservative groups have accused the SPLC of unfairly labeling them as “hate groups,” and last month, the Center for Immigration Studies sued the SPLC for “falsely designating” it as a hate group in 2016, saying the SPLC has produced no evidence that the group maligns immigrants as a class.

  • Putting data privacy in the hands of users

    In today’s world of cloud computing, users of mobile apps and web services store personal data on remote data center servers. Services often aggregate multiple users’ data across servers to gain insights on, say, consumer shopping patterns to help recommend new items to specific users, or may share data with advertisers. Traditionally, however, users haven’t had the power to restrict how their data are processed and shared. A new platform acts as a gatekeeper, ensuring that web services adhere to a user’s custom data restrictions.
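    The gatekeeper idea can be sketched as a deny-by-default policy check: a service’s request to use a person’s data is honored only for purposes that person has opted into. The user names, purposes, and policy table below are invented for illustration and are not the platform’s actual API.

```python
# Toy data-use gatekeeper (hypothetical names and policies, not the real platform).
# Each user lists the purposes for which a service may process their data.
POLICIES = {
    "alice": {"recommendations"},          # recommendations only, no ad targeting
    "bob": {"recommendations", "ads"},
}

def allowed(user: str, purpose: str) -> bool:
    """Deny by default; permit only purposes the user explicitly opted into."""
    return purpose in POLICIES.get(user, set())

print(allowed("alice", "ads"))   # alice never opted into ad use
print(allowed("bob", "ads"))     # bob did
```

    A deny-by-default check like this means an unknown user, or an unlisted purpose, is always refused rather than silently permitted.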

  • Don’t be fooled by fake images and videos online

    Advances in artificial intelligence have made it easier to create compelling and sophisticated fake images, videos and audio recordings. Meanwhile, misinformation proliferates on social media, and a polarized public may have become accustomed to being fed news that conforms to their worldview. All of this contributes to a climate in which it is increasingly difficult to believe what you see and hear online. As the author of the upcoming book Fake Photos, to be published in August, I’d like to offer a few tips to protect yourself from falling for a hoax.

  • Are Russian trolls saving measles from extinction?

    Scientific researchers say Russian social-media trolls who spread discord before the 2016 U.S. presidential election may also have contributed to the 2018 outbreak of measles in Europe that killed 72 people and infected more than 82,000 — mostly in Eastern and Southeastern European countries known to have been targeted by Russia-based disinformation campaigns. Experts in the United States and Europe are now working on ways to gauge the impact that Russian troll and bot campaigns have had on the spread of the disease by distributing medical misinformation and raising public doubts about vaccinations.

  • Russia is attacking the U.S. system from within

    A new court filing submitted last Wednesday by Special Counsel Robert Mueller shows that a Russian troll farm currently locked in a legal battle over its alleged interference in the 2016 election appeared to wage yet another disinformation campaign late last year—this time targeting Mueller himself. Concord Management and Consulting is accused of funding the troll farm, known as the Internet Research Agency. But someone connected to Concord allegedly manipulated the documents and leaked them to reporters, hoping the documents would make people think that Mueller’s evidence against the troll farm and its owners was flimsy. Natasha Bertrand writes that “The tactic didn’t seem to convince anyone, but it appeared to mark yet another example of Russia exploiting the U.S. justice system to undercut its rivals abroad.”

  • Fake news detector algorithm works better than human screeners

    An algorithm-based system that identifies telltale linguistic cues in fake news stories could provide news aggregators and social media sites like Google News with a new weapon in the fight against misinformation. The researchers who developed the system have demonstrated that it is comparable to, and sometimes better than, humans at correctly identifying fake news stories.
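    To give a feel for what “telltale linguistic cues” means, here is a deliberately crude sketch that counts a few surface signals often associated with fabricated stories. The cue list and scoring are invented for illustration; the researchers’ actual system learns its cues and weights from labeled data rather than using a hand-made word list.

```python
import re

# Hypothetical cue list -- a real detector learns such signals from training data.
SENSATIONAL_CUES = {"shocking", "unbelievable", "secret", "exposed", "miracle"}

def cue_features(text: str) -> dict:
    """Count simple surface cues: sensational words, exclamations, shouting."""
    words = re.findall(r"[A-Za-z']+", text)
    lower = [w.lower() for w in words]
    return {
        "sensational": sum(w in SENSATIONAL_CUES for w in lower),
        "exclamations": text.count("!"),
        "all_caps": sum(1 for w in words if len(w) > 2 and w.isupper()),
    }

def crude_score(text: str) -> int:
    """Sum of cue counts; higher = more suspect (toy heuristic only)."""
    return sum(cue_features(text).values())

print(crude_score("SHOCKING secret exposed!!!"))                 # scores high
print(crude_score("The committee published its quarterly report."))  # scores low
```

    A production system would feed features like these into a trained classifier instead of simply summing them.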

  • Peering under the hood of fake-news detectors

    New work from MIT researchers peers under the hood of an automated fake-news detection system, revealing how machine-learning models catch subtle but consistent differences in the language of factual and false stories. The research also underscores how fake-news detectors should undergo more rigorous testing to be effective for real-world applications.

  • Want to squelch fake news? Let the readers take charge

    Would you like to rid the internet of false political news stories and misinformation? Then consider using — yes — crowdsourcing. That’s right. A new study co-authored by an MIT professor shows that crowdsourced judgments about the quality of news sources may effectively marginalize false news stories and other kinds of online misinformation.
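    The crowdsourcing approach can be sketched as averaging laypeople’s trust ratings per news source and ranking links accordingly. The domains, ratings, and neutral default below are made up for illustration and are not the study’s data or method.

```python
from statistics import mean

# Invented crowd ratings (1-5) for two fictional domains.
RATINGS = {
    "example-daily.com": [4, 5, 4, 5],
    "totally-real-news.net": [1, 2, 1, 1],
}

def crowd_trust(domain: str) -> float:
    """Average crowd rating; unrated sources get a neutral 3."""
    return mean(RATINGS.get(domain, [3]))

def rank(domains: list) -> list:
    """Order links so low-trust sources are pushed down the feed."""
    return sorted(domains, key=crowd_trust, reverse=True)

print(rank(["totally-real-news.net", "example-daily.com"]))
```

    Rating sources rather than individual stories is what makes the approach scale: a platform needs only a few thousand judgments per outlet, not per article.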

  • Russia’s hostile measures threaten Europe: Report

    A new RAND report examines current Russian hostile measures in Europe and forecasts how Russia might threaten Europe using these measures over the next few years. “Whatever the U.S. response, preparation for involvement in a wide range of conflicts can help reduce the risk of mismanagement, miscalculation, and escalation,” the report’s authors say.

  • Kansas anti-Muslim bomb plotters sentenced to long prison terms

    Three members of a far-right militia, who were convicted of plotting to massacre Muslims in southwest Kansas immediately after the November 2016 election, were sentenced Friday to decades in prison. The terrorist plot was foiled after another militia member informed the police. Defense attorneys, in their sentencing memo, vigorously presented what came to be known as The Trump Defense: They argued that Trump’s anti-Muslim rhetoric during the 2016 election made attacks against Muslims appear legitimate. The defense attorneys also argued that the plot architect had been “immersed” in Russian disinformation and far-right propaganda, leading him to believe that if Donald Trump won the election, then-President Barack Obama would declare martial law and not recognize the validity of the election — forcing armed militias to step in to ensure that Trump became president.

  • 2016 Twitter fake news engagement: Highly concentrated and conservative-leaning

    By studying how more than 16,000 American registered voters interacted with fake news sources on Twitter during the 2016 U.S. presidential election, researchers report that engagement with fake news was extremely concentrated. Only a small fraction of Twitter users accounted for the vast majority of fake news exposures and shares, they say, many of them older, conservative and politically engaged.
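    “Extremely concentrated” can be made precise by asking what share of all fake-news shares the most active 1% of users account for. The function and the synthetic counts below are illustrative only, not the study’s data.

```python
def top_share(shares: list, frac: float = 0.01) -> float:
    """Fraction of total shares attributable to the most active `frac` of users."""
    ordered = sorted(shares, reverse=True)
    k = max(1, int(len(ordered) * frac))
    return sum(ordered[:k]) / sum(ordered)

# Synthetic example: 100 users, one prolific sharer, the rest share once each.
counts = [500] + [1] * 99
print(top_share(counts))  # the single top user accounts for most shares
```

    On such a skewed distribution the top 1% of users dominates the total, which is the shape the researchers report for real fake-news engagement.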

  • Cloaking location on mobile devices to protect privacy

    We agree to give up some degree of privacy anytime we search Google to find a nearby restaurant or use other location-based apps on our mobile devices. The occasional search may be fine, but researchers say repeatedly pinpointing our location reveals information about our identity, which may be sold or shared with others. The researchers say there is a way to limit what companies can glean from location information.
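    One common way to limit what location data reveals is spatial cloaking: snapping coordinates to a coarse grid so repeated queries can’t be linked to an exact home or workplace. The sketch below uses an assumed 0.01-degree cell (roughly 1 km of latitude) and is an illustration of the general technique, not the researchers’ specific mechanism.

```python
def cloak(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap (lat, lon) to the nearest point on a coarse grid.

    cell_deg = 0.01 degrees is about 1 km of latitude -- coarse enough to
    blur a home address, fine enough that a restaurant search still works.
    """
    def snap(x: float) -> float:
        return round(round(x / cell_deg) * cell_deg, 6)
    return (snap(lat), snap(lon))

print(cloak(40.712776, -74.005974))  # lower Manhattan, blurred to ~1 km
```

    The trade-off is tunable: a larger cell leaks less about identity but makes nearby-place results less accurate.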

  • On Facebook and Twitter, even if you don’t have an account, your privacy is at risk

    Individual choice has long been considered a bedrock principle of online privacy. If you don’t want to be on Facebook, you can leave or not sign up in the first place. Then your behavior will be your own private business, right? A new study shows that privacy on social media is like second-hand smoke. It’s controlled by the people around you.