Twitter on Tuesday moved to further stifle abusive commentary on its live video streaming application, Periscope.

A new tool introduced by the one-to-many messaging service lets people viewing Periscope broadcasts quickly report what they feel are inappropriate comments.

Small groups of randomly selected viewers will then be polled on whether they agree that the flagged comments are abusive or "spam."

Those found guilty as charged will be temporarily suspended from commenting further in the related Periscope broadcast, according to Twitter.

Repeat offenders will eventually be blocked from commenting for the remainder of the broadcast.
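
For illustration only, the following is a minimal Python sketch of the viewer-jury flow as described above. It is not Twitter's actual implementation; the jury size, vote threshold, number of suspensions before a block, and all names (BroadcastModeration, report_comment, ask_viewer) are assumptions made for the example.

    import random
    from collections import defaultdict

    # Hypothetical constants -- the article does not specify jury size,
    # vote thresholds, or how many suspensions precede a block.
    JURY_SIZE = 5
    SUSPENSIONS_BEFORE_BLOCK = 2

    class BroadcastModeration:
        """Sketch of the viewer-jury moderation flow for one broadcast."""

        def __init__(self, viewers):
            self.viewers = list(viewers)
            self.suspension_counts = defaultdict(int)  # commenter -> suspensions so far
            self.blocked = set()                       # blocked for rest of broadcast

        def report_comment(self, commenter, comment, ask_viewer):
            """A viewer flags a comment; a small random jury votes on it.

            `ask_viewer(viewer, comment)` stands in for the real poll and
            should return True if that viewer deems the comment abusive or spam.
            """
            if commenter in self.blocked:
                return "already blocked"

            jury = random.sample(self.viewers, min(JURY_SIZE, len(self.viewers)))
            votes_abusive = sum(1 for v in jury if ask_viewer(v, comment))

            # A majority of the jury must agree before any action is taken.
            if votes_abusive <= len(jury) // 2:
                return "no action"

            self.suspension_counts[commenter] += 1
            if self.suspension_counts[commenter] > SUSPENSIONS_BEFORE_BLOCK:
                # Repeat offender: no more comments for the rest of the broadcast.
                self.blocked.add(commenter)
                return "blocked for remainder of broadcast"
            return "temporarily suspended from commenting"

A call such as mod.report_comment("troll", "abusive text", ask_viewer=lambda v, c: True) would, in this sketch, suspend the commenter on the first majority vote and block them after repeated offenses.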

"One of the unique things about Periscope is that you're often interacting with people you don't know; that immediate intimacy is what makes it such a captivating experience," Periscope chief executive and co-founder Kayvon Beykpour said in a statement.

"But, that intimacy can also be a vulnerability if strangers post abusive comments."

Beykpour depicted the new tool as a way to transparently tap into the viewing community to help "moderate bad actors."

Periscope allows anyone to broadcast live to a global audience and enables viewers to interact in real time.

The use of impromptu juries of viewers to judge abuses will work in tandem with existing tools for reporting or blocking nastiness, according to Twitter.

Twitter and other Internet titans face the challenge of letting people connect and share online while preventing bullying, threats, and other foul behavior.

The Internet was expected to facilitate better exchanges between the public and news media. But vile and hateful comments changed all that.