
Twitter is one of the most widely used social networking platforms, and it draws heavy scrutiny whenever a feature doesn't work as intended. That criticism has never stopped the company from experimenting with new features, but it has changed how it ships them. Rather than launching features out of the blue, Twitter has quietly begun gathering users' input and opinions on potential features it is testing. It is a win-win: the company avoids much of the backlash and usually gets honest, useful feedback.
With that in mind, Twitter has been trying to protect users from negative comments and harassment on its platform through new features and tweaks. According to recent reports, Twitter is pushing further to limit toxic replies by introducing tools that let users be more proactive about filtering out negative and abusive comments.
As mentioned in a report by Engadget, Twitter is testing two new features: one automatically filters out toxic replies so they don't show up, while the other limits unwanted and potentially harmful accounts from replying to your tweets. Both features are still in testing, and Twitter is asking users for their insights and opinions.
The features were confirmed by Paula Barcante, Senior Product Designer at Twitter, who shared visuals of how they would work. However, she stressed that these are merely concepts and may differ from the actual features if and when they are released.
If potentially harmful or offensive replies to your Tweet are detected, we’d let you know in case you want to turn on these controls to filter or limit future unwelcome interactions.
You would also be able to access these controls in your settings. pic.twitter.com/ok5qXOf33Z
— Paula Barcante (@paulabarcante) September 24, 2021
Twitter's new feature would automatically detect when potentially harmful accounts try to reach you by replying to your tweets and limit them from doing so. Accounts that have recently shown signs of violating Twitter's policies would be more likely to be caught by the new filter.
However, Barcante acknowledged that because the process is automated, the system could sometimes be inaccurate and filter out replies that aren't actually toxic, such as a joking jab from a friend who is "allowed" to be disrespectful in a playful way. Twitter is still working on the algorithm and has several details to figure out before these features are ready.
Barcante added that the features still require testing and iteration, and that the platform will share more details if it decides to officially launch these tools for everyone.