Experts: Conditions behind cyberattack may be hard to mimic
A screenshot of the warning screen from a purported ransomware attack, as
captured by a computer user in Taiwan, is seen on a laptop in Beijing,
Saturday, May 13, 2017. Dozens of countries were hit with a huge
cyberextortion attack Friday that locked up computers and held users’
files for ransom at a multitude of hospitals, companies and government
agencies. (AP Photo/Mark Schiefelbein)
New York (AP) -
The cyberextortion attack hitting dozens of countries spread quickly and
widely thanks to an unusual confluence of factors: a known and highly
dangerous security hole in Microsoft Windows, tardy users who didn’t
apply Microsoft’s March software fix, and a software design that allowed
the malware to spread quickly once inside university, business and
government networks.
Not to mention the
fact that those responsible were able to borrow weaponized software code
apparently created by the U.S. National Security Agency to launch the
attack in the first place.
Other criminals may
be tempted to mimic the success of Friday’s “ransomware” attack, which
locks up computers and holds people’s files for ransom. Experts say it
will be difficult for them to replicate the conditions that allowed the
so-called WannaCry ransomware to proliferate across the globe.
But we’re still
likely to be living with less virulent variants of WannaCry for some
time. And that’s for a simple reason: Individuals and organizations
alike are fundamentally terrible about keeping their computers
up-to-date with security fixes.
The worm turns ... and turns
One of the first
“attacks” on the internet came in 1988, when a graduate student named
Robert Morris Jr. released a self-replicating and self-propagating
program known as a “worm” onto the then-nascent internet. That program
spread much more quickly than expected, soon choking and crashing
machines across the internet.
The Morris worm
wasn’t malicious, but other nastier variants followed - at first for
annoyance, later for criminal purposes, such as stealing passwords. But
these worm attacks became harder to pull off as computer owners and
software makers shored up their defenses.
So criminals turned
to targeted attacks instead to stay below the radar. With ransomware,
criminals typically trick individuals into opening an email attachment
containing malicious software. Once installed, the malware just locks up
that computer without spreading to other machines.
The hackers behind
WannaCry took things a step further by creating a ransomware worm,
allowing them to demand ransom payments not just from individuals but
from entire organizations - maybe even thousands of organizations.
The Perfect Storm
Once inside an
organization, WannaCry uses a Windows vulnerability purportedly
identified by the NSA and later leaked to the internet. Although
Microsoft released fixes in March, the attackers counted on many
organizations not getting around to applying those fixes. Sure enough,
WannaCry found plenty of targets.
Because security
professionals typically focus on building walls to block hackers from
entering, security tends to be less rigorous inside the network.
WannaCry exploited common techniques employees use to share files via a
central server. Malware that “penetrates the perimeter and then spreads
inside the network tends to be
quite successful,” said Johannes Ullrich, director of the Internet Storm
Center at the SANS Institute.
“When any technique
is shown to be effective, there are almost always copycats,” said Steve
Grobman, chief technology officer of McAfee, a security company in Santa
Clara, California. But that’s complicated, because hackers need to find
security flaws that are unknown, widespread and relatively easy to
exploit.
In this case, he
said, the NSA apparently handed the WannaCry makers a blueprint -
pre-written code for exploiting the flaw, allowing the attackers to
essentially cut and paste that code into their own malware.
Mikko Hypponen,
chief research officer at the Helsinki-based cybersecurity company
F-Secure, said ransomware attacks like WannaCry are “not going to be the
norm.” But they could still linger as low-grade infections that flare up
from time to time.
For instance, the
Conficker virus, which first appeared in 2008 and can disable system
security features, also spreads through vulnerabilities in internal file
sharing. As makers of anti-virus software release updates to block it,
hackers deploy new variants to evade detection.
Conficker was more
of a pest and didn’t do major damage. WannaCry, on the other hand,
threatens to permanently lock away user files if the computer owner
doesn’t pay a ransom, which starts at $300 but goes up after two hours.
The damage might
have been temporarily contained. An unidentified young cybersecurity
researcher claimed to help halt WannaCry’s spread by activating a
so-called “kill switch.” Other experts found his claim credible. But
attackers can, and probably will, simply develop a variant to bypass it.
The attack is
likely to prompt more organizations to apply the security fixes that
would prevent the malware from spreading automatically. “Talk about a
wake-up call,” Hypponen said.
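According to public accounts of the incident, the kill switch worked because the malware checked whether a particular hard-coded domain name resolved, and stopped spreading once a researcher registered that domain. A minimal sketch of that logic, with a placeholder domain standing in for the real one:

```python
import socket

# Hypothetical sketch of a domain-based kill switch, as WannaCry's was
# reported to work. The domain below is a placeholder, not the real one.
KILL_SWITCH_DOMAIN = "example-kill-switch.invalid"

def kill_switch_active(domain: str = KILL_SWITCH_DOMAIN) -> bool:
    """Return True if the domain resolves, i.e. someone has registered it."""
    try:
        socket.gethostbyname(domain)
        return True   # domain resolves: registration halts further spread
    except socket.gaierror:
        return False  # unregistered domain: the malware would keep going
```

Because every new infection performed the same check, registering one domain halted the outbreak globally, and a variant with the check stripped out would bypass the protection.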
Companies are often
slow to apply these fixes, called patches, because of worries that any
software change could break some other program, possibly shutting down
crucial operations.
“Whenever there is
a new patch, there is a risk in applying the patch and a risk in not
applying the patch,” Grobman said. “Part of what an organization needs
to understand and assess is what those two risks are.”
Friday’s attack
might prompt companies to reassess the balance. And while other
attackers might use the same flaw, such attacks will be steadily less
successful as organizations patch it.
Microsoft took the
unusual step late Friday of making free patches available for older
Windows systems, such as Windows XP from 2001. Before, Microsoft had
made such fixes available only to the mostly larger organizations that pay
extra for extended support, yet millions of individuals and smaller
businesses still had such systems.
But there will be
other vulnerabilities to come, and not all of them will have fixes for
older systems. And those fixes will do nothing for newer systems if they
aren’t installed.
Facebook ramps up its response to violent videos
In a blog post Wednesday, May 3, 2017, Zuckerberg said that Facebook will
hire another 3,000 people to review videos of crime and suicides
following murders shown live. (AP Photo/Eric Risberg, File)
New York (AP) -
Facebook is stepping up its efforts to keep inappropriate and often
violent material - including recent high-profile videos of murders and
suicides, hate speech and extremist propaganda - off of its site.
On Wednesday, the
world’s biggest social network said it plans to hire 3,000 more people
to review videos and other posts after getting criticized for not
responding quickly enough to murders shown on its service.
The hires over the
next year will be on top of the 4,500 people Facebook already tasks with
identifying criminal and other questionable material for removal. CEO
Mark Zuckerberg wrote Wednesday that the company is “working to make
these videos easier to report so we can take the right action sooner -
whether that’s responding quickly when someone needs help or taking a
post down.”
Facebook, which had
18,770 employees at the end of March, would not say if the new hires
would be contractors or full-time workers. David Fischer, the head of
Facebook’s advertising business, said in an interview that the detection
and removal of hate speech and content that promotes violence or
terrorism is an “ongoing priority” for the company, and the community
operations teams are a “continued investment.”
Videos and posts
that glorify violence are against Facebook’s rules, but Facebook has
drawn criticism for responding slowly to such items, including video of
a slaying in Cleveland and the live-streamed killing of a baby in
Thailand. The Thailand video was up for 24 hours before it was removed.
In most cases, such
material gets reviewed for possible removal only if users complain. News
reports and posts that condemn violence are allowed. This makes for a
tricky balancing act for the company. Facebook does not want to act as a
censor, as videos of violence, such as those documenting police
brutality or the horrors of war, can serve an important purpose.
Policing live video
streams is especially difficult, as viewers don’t know what will happen.
This rawness is part of their appeal.
While the negative
videos make headlines, they are just a tiny fraction of what users post
every day. The good? Families documenting a toddler’s first steps for
faraway relatives, journalists documenting news events, musicians
performing for their fans and people raising money for charities.
“We don’t want to
get rid of the positive aspects and benefits of live streaming,” said
Benjamin Burroughs, a professor of emerging media at the University of
Nevada in Las Vegas.
Burroughs said that
Facebook clearly knew live streams would help the company make money,
as they keep users on Facebook longer, making advertisers happy. If
Facebook hadn’t also considered the possibility that live streams of
crime or violence would inevitably appear alongside the positive stuff,
“they weren’t doing a good enough job researching implications for
societal harm,” Burroughs said.
With a quarter of
the world’s population on it, Facebook can serve as a mirror for
humanity, amplifying both the good and the bad - the local fundraiser
for a needy family and the murder-suicide in a faraway corner of the
planet. But lately, it has gotten outsized attention for its role in the
latter, whether that means allowing the spread of false news and
government propaganda or videos of horrific crimes.
Videos
livestreaming murder or depicting kidnapping and torture have made
international headlines even when the crimes themselves wouldn’t have,
simply because they were on Facebook, visible to people who wouldn’t
have seen them otherwise.
As the company
introduces even more new features, it will continue to grapple with the
reality that they will not always be used for positive or even mundane
purposes. From his interviews and Facebook posts, it appears that
Zuckerberg is at least aware of this, even if his company doesn’t always
respond as quickly as outsiders would like.
It’s “heartbreaking, and I’ve been reflecting on how we can do better for our
community,” Zuckerberg wrote on Wednesday about the recent videos.
It’s a learning
curve for Facebook. In November, for example, Zuckerberg called the idea
that false news on Facebook influenced the U.S. election “crazy.” A
month later, the company introduced a slew of initiatives aimed at
combating false news and supporting journalism. And just last week, it
acknowledged that governments or others are using its social network to
influence political sentiment in ways that could affect national
elections.
What to do
Zuckerberg said Facebook workers review “millions of reports” every week. In addition to
removing videos of crime or getting help for someone who might hurt
themselves, he said, the company’s bulked-up reviewing force will “also
help us get better at removing things we don’t allow on Facebook like
hate speech and child exploitation.”
Wednesday’s
announcement is a clear sign that Facebook continues to need human
reviewers to monitor content, even as it tries to outsource some of the
work to software, due in part to its sheer size and the volume of stuff
posted every day.
It’s not all up to
Facebook, though. Burroughs said users themselves need to decide whether
they want to look at violent videos posted on Facebook or to circulate
them, for example. And he urged news organizations to consider whether
each Facebook live-streamed murder is a story.
“We have to be
careful that it doesn’t become a kind of voyeurism,” he said.
Twitter eases 140-character
limit in replies
New York (AP) - Twitter has
found more creative ways to ease its 140-character limit without
officially raising it.
Now, the company says that when you
reply to someone - or to a group - usernames will no longer count toward
those 140 characters. This will be especially helpful with group
conversations, where replying to two, three or more users at a time
could be especially difficult with the character constraints.
When users reply, the names of the
people they are replying to will be on top of the text of the actual
tweet, rather than a part of it.
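The counting rule described above can be sketched as follows; this is an illustrative approximation, not Twitter’s actual implementation:

```python
import re

# Illustrative approximation of the new rule: leading @usernames in a
# reply no longer count toward the 140-character limit, but a mention
# appearing in the middle of the text still does.
LEADING_MENTIONS = re.compile(r"^(?:@\w+\s+)+")

def chars_against_limit(tweet: str) -> int:
    """Characters counted toward the limit, ignoring leading @mentions."""
    return len(LEADING_MENTIONS.sub("", tweet))
```

For example, `chars_against_limit("@alice @bob thanks!")` counts only the 7 characters of "thanks!", so a reply addressed to several people leaves the full 140 characters for the message itself.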
Last year, Twitter said it would
stop counting photos, videos, quote tweets, polls and GIF animations
toward the character limit. Twitter also said it would stop counting
usernames, but the change did not go into effect until now.
Twitter, which has been struggling
to attract new users, has been trying to appeal to both proponents and
opponents by sticking to the current limit while allowing more freedom
to express thoughts, or rants, through images and other media.
Twitter’s character limit was
created so that tweets could fit into a single text message, back in the
heyday of SMS messaging. But now, most people use Twitter through its
mobile app. There isn’t the same technical constraint, just a desire on
Twitter’s part to stay true to its roots.
Of course, there are ways to get
around the limit, such as sending out multi-part tweets, or taking
screenshots of text typed elsewhere.
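The multi-part workaround can be sketched as a simple greedy splitter. This is a naive illustration of the idea, a hypothetical helper rather than any Twitter feature, assuming a counter suffix such as "1/3" on each part:

```python
def split_tweet(text: str, limit: int = 140) -> list[str]:
    """Naively split text into word-boundary parts that each fit the
    limit, reserving room for a trailing counter such as " 2/3"."""
    reserve = 6  # room for a suffix like " 12/34"
    parts, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit - reserve:
            current = candidate
        else:
            if current:
                parts.append(current)
            current = word  # note: a single over-long word is not split
    if current:
        parts.append(current)
    if len(parts) <= 1:
        return parts  # fits in one tweet, no counter needed
    n = len(parts)
    return [f"{part} {i}/{n}" for i, part in enumerate(parts, 1)]
```

A 300-character message comes back as three numbered tweets, each under the limit, which is essentially what manual "tweetstorms" do by hand.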
Google expands fact checking in news searches
New York (AP) - Google will
expand the use of “fact check” tags in its search results - the tech
industry’s latest effort to combat false and misleading news stories.
People who search for a topic in
Google’s main search engine or the Google News section will see a
conclusion such as “mostly true” or “false” next to stories that have
been fact checked.
Google has been working with more
than 100 news organizations and fact-checking groups, including The
Associated Press, the BBC and NPR. Their conclusions will appear in
search results as long as they meet certain formatting criteria for
Google’s systems.
Google said only a few of those
organizations, including PolitiFact and Snopes.com, have already met
those requirements; The Washington Post also says it complies. Google
said it expects the ranks of compliant organizations to grow following
the expansion.
Not all news stories will be fact
checked. Multiple organizations may reach different conclusions; Google
will show those separately.
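Google’s fact-check tags are widely reported to rely on schema.org’s ClaimReview structured-data markup embedded in publishers’ pages; meeting the formatting criteria means publishing reviews in this machine-readable form. A hedged sketch of such markup, with entirely illustrative field values, built as a Python dict and serialized to JSON-LD:

```python
import json

# Illustrative ClaimReview markup (schema.org), which Google's
# fact-check feature is reported to read from publishers' pages.
# All names and values here are made up for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "An example claim being fact-checked",
    "reviewRating": {
        "@type": "Rating",
        "alternateName": "Mostly true",  # the verdict shown in results
    },
    "author": {"@type": "Organization", "name": "Example Fact-Check Org"},
}

jsonld = json.dumps(claim_review, indent=2)
```

Because the verdict lives in a structured field rather than free text, Google can display conclusions from multiple organizations side by side when they disagree.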
Still unanswered is whether these
fact-check analyses will sway people who are already prone to believe
false reports because they confirm preconceived notions.
Glenn Kessler, who writes “The Fact
Checker” column at The Washington Post,
said in an email that Google’s efforts should at least “make it easier
for people around the world to obtain information that counters the spin
by politicians and political advocacy groups, as well as purveyors of
false news.”
He added that “over time, I expect
that people increasingly will want to read a fact-check on a
controversial issue or statement, even if the report conflicts with
their political leanings.”
Google started offering fact check
tags in the U.S. and the U.K. in October and expanded the program to a
handful of other countries in the subsequent months. Now the program is
open to the rest of the world and to all languages.
False news and misinformation,
often masquerading as trustworthy news as they spread on social media,
have gained attention since the 2016 U.S. presidential election.
Google’s announcement comes a day
after Facebook launched a resource to help users spot false news and
misleading information that spreads on its service. The resource is
basically a notification that pops up for a few days. Clicking on it
takes people to tips and other information on how to spot false news and
what to do about it.
Google affiliate offers tools to safeguard elections
New York (AP) -
An organization affiliated with Google is
offering tools that news organizations and election-related sites can use to
protect themselves from hacking.
Jigsaw, a research arm
of Google parent company Alphabet Inc., says that free and fair elections
depend on access to information. To ensure such access, Jigsaw says, sites
for news, human rights and election monitoring need to be protected from
attacks.
Jigsaw’s suite of
tools, called Protect Your Election, is mostly a repackaging of existing
offerings:
- Project Shield will
help websites guard against denial-of-service attacks, in which hackers
flood sites with so much traffic that legitimate visitors can’t get through.
Users of Project Shield will be tapping technology and servers that Google
already uses to protect its own sites from such attacks.
- Password Alert is
software that people can add to Chrome browsers to warn them when they try
to enter their Google password on another site, often a sign of a phishing
attempt.
- 2-Step Verification
helps beef up security beyond passwords by requiring a second access code,
such as a text sent to a verified cellphone. Though Jigsaw directs users to
turn this on for Google accounts, most major rivals offer similar
features.
“This is as much an
occasion to have a conversation about digital security as it is putting all
the tools in one place,” Jigsaw spokesman Dan Keyserling said.
While the tools can be
useful to a variety of groups and individuals, Jigsaw says it is focusing on
elections because cyberattacks often increase against news organizations and
election information sites around election time. In particular, Jigsaw wants
to help sites deploy the tools ahead of the French presidential elections,
which begin April 23.
The tools are free,
though Project Shield is limited to news organizations, individual
journalists, human-rights groups and election-monitoring organizations.
It’s not known whether
the tools might have prevented some of the high-profile attacks in the past,
including the theft of emails from Democratic Party computers during the
2016 U.S. presidential campaign. The tools do not directly address such
break-ins, but they could help guard against password stealing, a common
precursor to break-ins.
Got camera? Facebook adds
more Snapchat-like features
An image provided by Facebook shows an overview of the features of Facebook’s new app
update on an iPhone. (Facebook via AP)
New York (AP) - Facebook is
adding more Snapchat-like features to its app. The company says it wants to
let your camera “do the talking” as more people are posting photos and
videos instead of blocks of text.
Facebook is rolling out an app update
that started Tuesday, March 28. With it, you can tap a new camera icon on
the top left corner. That opens up the phone’s camera to do a photo or video
post. You could have posted photos from the app before, but it took an extra
step.
Once you open the camera, you’ll find
Facebook’s other new Snapchat-like features, including filters that can be
added to images.
Other effects, such as animations and
other interactive filters, are a new twist to dressed-up photos.
Also new is a “stories” tool that lets
you post photos and videos that stay live for 24 hours. This feature is
already available on Messenger and Instagram, which is owned by Facebook.
Snapchat pioneered camera-first sharing
and is wildly popular with younger users. Years ago, Facebook tried to buy
the company but was rebuffed. Since then, it has been trying, with varying
degrees of success, to clone Snapchat’s most popular features.
It might be working: Snapchat’s growth
rate has slowed down since Instagram introduced its own “stories” feature.