
In Trying To Clear “Confusion” Over Anti-Harassment Policy, YouTube Creates More Confusion.


After a series of tweets that made it seem as if YouTube was contradicting its own anti-harassment policies, the video platform published a blog post in an attempt to clarify its stance. But even though the post is supposed to “provide more details and context than is possible in any one string of tweets” and promises that YouTube will reexamine its harassment policy, it raises yet more questions about how serious YouTube is about combatting harassment and hate speech on its platform, especially if the abuse comes from a high-profile channel with millions of subscribers.

YouTube is currently under fire for not taking earlier, more decisive actions against conservative commentator Steven Crowder after he made homophobic and racist comments about Vox reporter Carlos Maza in multiple videos. The platform eventually demonetized Crowder’s channel, which currently has more than 3.8 million subscribers, but then stated it would allow Crowder to start making ad revenue again if he fixed “all of the issues” with his channel and stopped linking to an online shop that sold shirts saying “Socialism is for f*gs.”

Before demonetizing Crowder’s channel, YouTube responded to Maza in a series of tweets that created confusion about how it enforces its policies. The platform said that after an “in-depth review” of flagged videos by Crowder, it decided that even though the language they contained was “clearly hurtful,” the videos did not violate its policies because “as an open platform, it’s crucial for us to allow everyone, from creators to journalists to late-night TV hosts, to express their opinions w/in the scope of our policies.” This was in spite of the fact that Crowder’s derogatory references to Maza’s ethnicity and sexual orientation violate several provisions of YouTube’s policy against harassment and cyberbullying, including its prohibition on “content that makes hurtful and negative personal comments/videos about another person.”

Carlos Maza, on Twitter: “I’ve been called an anchor baby, a lispy queer, a Mexican, etc. These videos get millions of views on YouTube. Every time one gets posted, I wake up to a wall of homophobic/racist abuse on Instagram and Twitter.”

In the new blog post, posted by YouTube head of communications Chris Dale, the platform gives a lengthy explanation of how it attempts to draw the line between things like “edgy stand-up comedy routines” and harassment. But in the case of Crowder’s persistent attacks on Maza, YouTube repeated its stance that the videos flagged by users “did not violate our Community Guidelines.”

As an open platform, we sometimes host opinions and views that many, ourselves included, may find offensive. These could include edgy stand-up comedy routines, a chart-topping song, or a charged political rant — and more. Short moments from these videos spliced together paint a troubling picture. But, individually, they don’t always cross the line.

There are two key policies at play here: harassment and hate speech. For harassment, we look at whether the purpose of the video is to incite harassment, threaten or humiliate an individual; or whether personal information is revealed. We consider the entire video: For example, is it a two-minute video dedicated to going after an individual? A 30-minute video of political speech where different individuals are called out a handful of times? Is it focused on a public or private figure?

For hate speech, we look at whether the primary purpose of the video is to incite hatred toward or promote supremacism over a protected group; or whether it seeks to incite violence. To be clear, using racial, homophobic, or sexist epithets on their own would not necessarily violate either of these policies. For example, as noted above, lewd or offensive language is often used in songs and comedic routines. It’s when the primary purpose of the video is hate or harassment. And when videos violate these policies, we remove them.

The decision to demonetize Crowder’s channel was ultimately made because “we saw the widespread harm to the YouTube community resulting from the ongoing pattern of egregious behavior, took a deeper look, and made the decision to suspend monetization,” Dale wrote. In order to start earning ad revenue again, “all relevant issues with the channel need to be addressed, including any videos that violate our policies, as well as things like offensive merchandise,” he added.

The latest YouTube controversy is both upsetting and exhausting, because it is yet another reminder of the company’s lack of action against hate speech and harassment, despite its constant insistence that it will do better. (Just yesterday, for example, YouTube announced that it will ban videos that espouse views like white supremacy or Nazi ideology, or that promote conspiracy theories denying events like the Holocaust or the Sandy Hook shooting.)

The passivity of social media companies when it comes to stemming the spread of hate through their platforms has real-life consequences (for example, when Maza was doxxed and harassed by fans of Crowder last year), and no amount of prevarication or distancing can stop the damage once it’s been done.

Source: TechCrunch


Instagram Makes It Easier To Take Back Hacked Accounts.


Instagram is finally addressing a huge problem on its platform: hacked accounts.

The company says it is making a series of changes that will make it easier for people to regain access to a hacked account. The update comes almost a year after Mashable first reported that a wave of bizarre hacks had hit Instagram users, leaving them little recourse to get their accounts back.

With the newly announced changes, which are currently being tested ahead of a wider rollout, Instagram will allow users to access its account recovery tools directly in the app, even if a hacker has changed their account information. So if you're unable to log in to your account, Instagram will prompt you to enter information associated with it, like your email address or phone number. (You can also access this via "need more help" on the app's login screen.)

From there, Instagram will send a verification code you can use to access your account. Instagram will also remove any other devices logged into your account, so a hacker who has access to your email will be unable to use the recovery code.
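To make the mechanics concrete, here is a minimal sketch in Python of a generic recover-by-code flow along the lines Instagram describes: a one-time code goes to a contact point the user supplies, and redeeming it revokes every other session. All of the names and the in-memory stores here are hypothetical; this is an illustration of the general scheme, not Instagram's actual implementation.

```python
# Hypothetical sketch of an in-app account-recovery flow like the one
# described above. None of these names are Instagram's real API.
import secrets
import time

RECOVERY_CODE_TTL = 10 * 60  # assumed: codes expire after 10 minutes

pending_codes = {}    # account_id -> (code, expiry timestamp)
active_sessions = {}  # account_id -> set of session tokens

def send_message(contact: str, body: str) -> None:
    print(f"[to {contact}] {body}")  # placeholder delivery channel

def request_recovery(account_id: str, contact: str) -> None:
    """Issue a short-lived one-time code to a contact point the user
    supplied, even if the hacker changed the account's stored info."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    pending_codes[account_id] = (code, time.time() + RECOVERY_CODE_TTL)
    send_message(contact, f"Your recovery code is {code}")

def redeem_recovery(account_id: str, code: str) -> str | None:
    """Verify the code; on success, revoke every existing session so a
    hacker holding a stolen login can't keep using the account."""
    entry = pending_codes.pop(account_id, None)  # codes are single-use
    if entry is None:
        return None
    stored_code, expires_at = entry
    if time.time() > expires_at or not secrets.compare_digest(stored_code, code):
        return None
    new_token = secrets.token_urlsafe(32)   # fresh session for the owner
    active_sessions[account_id] = {new_token}  # log out all other devices
    return new_token
```

The key design point, under these assumptions, is the last step: redeeming the code resets the session set, which is what "remove any other devices logged into your account" amounts to in practice.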

This may sound fairly straightforward, but these changes address significant issues with Instagram's previous account recovery process. Because hackers often changed the email, phone number, or username associated with an account, it could be incredibly difficult, if not impossible, for the actual account owner to navigate the automated support system.

Users have reported Instagram sending recovery emails to the address of their hackers, for example, or inexplicably telling them it could not verify their identity even though they provided the information requested. This caused some people to resort to more elaborate schemes, such as reporting a hacked account for impersonation or leaving voicemails for Instagram support.

This new process will hopefully make those kinds of moves a thing of the past, as Instagram says its goal is to move the entire account recovery process in-app. Additional support will still be available to those who need it, though, according to an Instagram spokesperson.

Notably, this new process will also apply to people whose accounts were previously hacked and who have been unable to regain access.

Additionally, Instagram says it's addressing another major issue often associated with hacked accounts: username theft. Because short or otherwise distinctive usernames are considered valuable, the accounts that hold them face a disproportionate number of hacking attempts. Hackers will often change a username in order to scoop it up for a fresh account or sell it on shady forums.

Now, Instagram says that a previously used username will be held for several days before anyone else can claim it, making it more difficult for hackers to steal valuable usernames. (The company isn't disclosing exactly how long names will be inaccessible to others, but a spokesperson says it will be "multiple days.")
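As a rough illustration of how such a hold might work, here is a short Python sketch: when a username is given up, it enters a hold window before anyone else can claim it. The seven-day window is an assumption on my part; Instagram only says the hold lasts "multiple days."

```python
# Hypothetical sketch of the username hold described above.
import time

HOLD_SECONDS = 7 * 24 * 60 * 60  # assumed hold window: 7 days

released_at = {}  # username -> time it was last given up

def release_username(username: str) -> None:
    """Called when an account's username changes; the old name enters
    the hold window instead of becoming instantly free."""
    released_at[username] = time.time()

def is_claimable(username: str) -> bool:
    """A recently released name stays locked until the hold expires, so
    a hacker can't immediately re-register it on a fresh account."""
    freed = released_at.get(username)
    return freed is None or time.time() - freed > HOLD_SECONDS
```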

While these changes are unlikely to put a stop to hacking attempts, or to the massive business of buying and selling stolen accounts, they could make life more difficult for hackers, at least until they find new ways to circumvent Instagram's policies. They should also give users more power to get their accounts back.

Source: Mashable


YouTube Reportedly Considers Moving All Children’s Content To YouTube Kids.


YouTube is reportedly considering major changes after a long string of terrible headlines involving everything from pedophiles to gun-wielding Disney characters.

The company might remove all children's content from YouTube and show it exclusively in the YouTube Kids app, a new Wall Street Journal article says.

The other option under discussion involves entirely turning off auto-playing recommended videos on children's content. This is the system that leads viewers from a seemingly harmless video to extreme content and conspiracy theories.

These changes would be immense for YouTube. The platform has reportedly been relying on down-ranking and reducing the reach of controversial content, rather than removing it outright. But turning off the recommendation algorithm for children altogether would amount to some sort of admission that it's the platform's architecture — not the content — that is the problem.

It could also potentially affect revenue by moving a sizable chunk of videos off the platform, away from YouTube's advertisers. YouTube Kids does have ads, but there are additional requirements for advertisers there.

The Journal also reports that Google CEO Sundar Pichai has recently been taking a more active role in the management of YouTube, which is run by Susan Wojcicki. Recent scandals involving the wildfire-like spread of the Christchurch shooting video and pedophilia rings enabled by YouTube's recommendation algorithm have reportedly caused internal upheaval.

In 2018, YouTube Kids added controls to allow parents to manually select the channels and creators that their kids would be able to watch. It also added more human moderators to remove harmful content. But not all kids watch videos on the Kids app alone, which means they currently could be exposed to the same algorithmic wormhole that adults are.

YouTube told the Journal that it considers "lots of ideas for improving YouTube and some remain just that—ideas."

Source: Mashable


Google Desperately Wants To Win Over Geeks’ Hearts


Google's acting really strange these days.

First, the company basically says "fuck it" and confirms both the Pixel 4 and its huge square-shaped camera bump. And now the company has publicly admitted to Business Insider that it has canceled two unreleased tablets and will instead focus on making Pixelbook laptops.

These two PR moves are unusual for a tech company. Usually, outfits like Google never acknowledge upcoming products. Why would they? It would take all the excitement out of their own launch event.

Moreover, tech companies almost never talk about canceled products, because those products will never see the light of day. There's no point in getting people worked up over products that technically don't exist.

That's why it's so out of character for Google to suddenly be so open. What's the goal here?

Maybe these two instances are unrelated, but to a tech observer like myself, it sure looks like Google is trying its hardest to court geeks, in an effort to convince super fans that it's serious about hardware this year. In fact, these moves feel like they come straight out of the playbook of startup phone maker OnePlus, which built its fanbase by catering to geeks as well.

Source: Mashable


Apple Recalls MacBook Pro Batteries Over ‘Fire Safety Risk’


If you have an older MacBook Pro, you might need to get its battery replaced.

Apple is recalling 15-inch MacBook Pro laptops sold between September 2015 and February 2017 over a battery issue it says poses a "fire safety risk."

"Apple has determined that, in a limited number of older generation 15-inch MacBook Pro units, the battery may overheat and pose a fire safety risk," the company writes on a support page about the recall.

Affected laptops should not be used until the company can issue battery replacements, Apple says. The recall applies only to 15-inch Pro models; other MacBooks are unaffected. If you're not totally sure whether your laptop is affected, it's a good idea to double-check.

Here's how Apple recommends you check to see if your laptop is affected:

To confirm which model you have, choose About This Mac from the Apple menu in the upper-left corner of your screen. If you have “MacBook Pro (Retina, 15-inch, Mid 2015),” enter your computer's serial number on the program page to see if it is eligible for a battery replacement.
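For those comfortable with a terminal, here is a hypothetical Python helper that automates the first half of that check on a Mac, using the real macOS system_profiler tool. The model identifiers listed for the Mid 2015 15-inch MacBook Pro are an assumption on my part; Apple's program page remains the authoritative check for your serial number.

```python
# Hypothetical helper for the check Apple describes above: read this
# Mac's model identifier and serial number via macOS's system_profiler.
import subprocess

# Assumed identifiers for "MacBook Pro (Retina, 15-inch, Mid 2015)".
MID_2015_15_INCH = {"MacBookPro11,4", "MacBookPro11,5"}

def hardware_info() -> dict[str, str]:
    """Parse `system_profiler SPHardwareDataType` output into a dict."""
    out = subprocess.run(
        ["system_profiler", "SPHardwareDataType"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

if __name__ == "__main__":
    info = hardware_info()
    model = info.get("Model Identifier", "")
    serial = info.get("Serial Number (system)", "")
    if model in MID_2015_15_INCH:
        print(f"Possible match; check serial {serial} on Apple's program page.")
    else:
        print(f"{model} does not look like a Mid 2015 15-inch MacBook Pro.")
```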

The recall comes shortly after one musician posted videos of his smoking MacBook Pro, which he said "exploded" after normal use. The musician, who goes by the name "White Panda," told Mashable in an interview that he had his laptop in his lap when smoke suddenly began pouring out of it. It later "popped" and caught fire.

It's not clear if the current recall is related to that issue, but Apple does make it very clear that the MacBook Pros in question could pose a serious safety risk.

Source: Mashable
