In an effort to curb disrespectful commentary on its platform, Google-owned YouTube is set to introduce a new feature that will notify users when their comments may be considered offensive to others, with an option to review them before posting.
As a user types a comment and moves to post it, YouTube’s AI-based systems will, if they deem the content offensive, display a so-called ‘review notification’ window. From there, the commenter can either proceed and post the comment unaltered, or pause for a moment to reassess it before posting.
YouTube’s Vice President of Product Management, Ms. Johanna Wright, announced that the platform will also test a new filter in YouTube Studio for potentially inappropriate and hurtful comments that have been automatically held for review, allowing creators to better manage comments and connect with their audience.
In a blog post, Ms. Wright remarked that the new system exists “so that creators don’t ever need to read them if they don’t want to,” adding that YouTube will also “streamline the comment moderation tools to make this process even easier for creators.”
Starting in 2021, YouTube is also expected to ask creators to voluntarily provide information about their gender, sexual orientation, race and ethnicity.
“We’ll then look closely at how content from different communities is treated in our search and discovery and monetization systems. We’ll also be looking for possible patterns of hate, harassment, and discrimination that may affect some communities more than others,” the YouTube official remarked.
The California-based online video-sharing platform reported that it has increased its daily removals of hate speech comments 46-fold since early 2019.
“In the last quarter, of the more than 1.8 million channels we terminated for violating our policies, more than 54,000 terminations were for hate speech,” the company observed.