At Hacktech 2019, I intend to build a functioning prototype of the Guardia-NN system.
The hardware prototype consists of the following:
A hardware device that features a huge button (representing “I DON’T WANT TO SEE THIS”) and an indicator (representing “YOU SAID YOU DON’T LIKE STUFF LIKE THIS”).
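To make the hardware half concrete, here is a minimal sketch of how the button and indicator could be wired, assuming a Raspberry Pi and the RPi.GPIO library; the pin numbers, polling loop, and demo behavior are illustrative assumptions, not the actual prototype's wiring.

```python
# Hypothetical wiring sketch: big button plus indicator light on a Raspberry Pi.
# Pin assignments are placeholders, not the real prototype's.
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17     # "I DON'T WANT TO SEE THIS"
INDICATOR_PIN = 27  # "YOU SAID YOU DON'T LIKE STUFF LIKE THIS"

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(INDICATOR_PIN, GPIO.OUT)

def button_pressed():
    """Active-low with the internal pull-up: a press reads as LOW."""
    return GPIO.input(BUTTON_PIN) == GPIO.LOW

def set_indicator(on):
    GPIO.output(INDICATOR_PIN, GPIO.HIGH if on else GPIO.LOW)

try:
    while True:
        # In the real device this loop would also consult the learned model
        # (see the classifier sketch further below) to decide the indicator state.
        set_indicator(button_pressed())  # echo the button, purely for demo purposes
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```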
The device solves two problems at once. First, people don't like to see things that offend or traumatize them. Second, people generally don't like being told by others (who may not share their views and values) what is and is not offensive or triggering.
In other words, the machine serves as a guardian built on neural networks (hence the name Guardia-NN), replacing the almost universally reviled trope of the overly politically correct “social justice warrior” with a machine. Now that’s automation we can all get behind!
It performs this task by picking up on user-defined trigger warnings (text tags that indicate that content will cause distress, offend, or otherwise bother the user) and relying on neural networks (large chains of decision-making procedures) to learn, from repeated use, what the end user does and does not want to see. The device warns users that the content ahead may trigger them and lets them navigate around it with minimal exposure.
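As a rough illustration of the software half, the sketch below trains a tiny feed-forward network (NumPy only) online, one gradient step per interaction, to predict whether the user would press the button given a piece of content's trigger tags. The tag vocabulary, network size, learning rate, and update rule here are placeholder assumptions for the sake of example, not the actual Guardia-NN model.

```python
# Hypothetical sketch: a tiny neural network that learns, from button presses,
# which user-defined trigger tags predict "I don't want to see this".
import numpy as np

TAGS = ["violence", "gore", "spiders", "politics", "medical"]  # example tags only
TAG_INDEX = {t: i for i, t in enumerate(TAGS)}

rng = np.random.default_rng(0)
HIDDEN = 8
W1 = rng.normal(0, 0.1, (HIDDEN, len(TAGS)))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, HIDDEN)
b2 = 0.0
LR = 0.1

def encode(tags):
    """Multi-hot vector over the known tag vocabulary."""
    x = np.zeros(len(TAGS))
    for t in tags:
        if t in TAG_INDEX:
            x[TAG_INDEX[t]] = 1.0
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(tags):
    """Probability that the user would press the big button on this content."""
    x = encode(tags)
    h = np.tanh(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

def update(tags, pressed_button):
    """One online gradient step from a single interaction.
    pressed_button=True means the user hit the button on this content."""
    global W1, b1, W2, b2
    x = encode(tags)
    h = np.tanh(W1 @ x + b1)
    p = sigmoid(W2 @ h + b2)
    y = 1.0 if pressed_button else 0.0
    d_out = p - y                       # gradient of cross-entropy w.r.t. pre-sigmoid output
    dW2 = d_out * h
    db2 = d_out
    d_h = d_out * W2 * (1.0 - h ** 2)   # backprop through tanh
    dW1 = np.outer(d_h, x)
    db1 = d_h
    W2 -= LR * dW2
    b2 -= LR * db2
    W1 -= LR * dW1
    b1 -= LR * db1

# Example interaction loop: light the indicator when the predicted probability
# crosses a threshold the user is comfortable with.
update(["spiders"], pressed_button=True)
update(["politics"], pressed_button=False)
print(predict(["spiders"]))   # drifts toward 1.0 with repeated presses
print(predict(["politics"]))  # stays low if the user never presses
```

An online, one-step-per-interaction update like this keeps the model cheap enough to run on a small device, which matters given the limited computational power noted below.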
Because of its limited computational power and reliance on user interaction, it is not a feasible means of censoring other people’s internet use. Rather, it is intended to give users a warning before they encounter content that may cause psychological harm, so they can mitigate their exposure, akin to an allergen warning system.