When considering mental health issues linked to online behaviour, our thoughts may turn to cyberbullying, but there is another concern far more formidable than we might realise — access. Search engines have given us easy access to more, and seemingly ever-fresh, sources of content that can be just as detrimental as cyberbullying, if not more so. Of course, search engines, along with personal computers, the internet and the World Wide Web, are not inherently bad, but there is reason to be cautious. While we have seen great leaps in their development and functionality, have we, as technology users, understood the accompanying risks?
Paradoxically, while one of the most significant areas of progress in computing relates to search engine algorithms, some of the most concerning issues are rooted in them. With greater use of search engines, their design evolved around deep learning, location data and ever-greater processing power. This combination has made them more powerful, making it easier for users to find the content they request. However, it has also increased the opportunity for unwanted or harmful content to appear, or be requested, potentially disturbing the user.
Early search engines
The first web search engines were built in the 1990s after Tim Berners-Lee’s successful proposal for the World Wide Web. Most of their crucial development was done in the ’90s; however, modern search engines, such as Google, are now self-optimising, with algorithms tuned in real time and daily improvements to the user experience to suit the “modern” user.
Before Google Search became the preferred search engine, there was another giant, Yahoo! Search. Founded in 1994, Yahoo! was one of the web pioneers, offering a hierarchical directory of websites organised by category — Yahoo! Directory. At first, Yahoo! Search could only search this directory; later, it started using its own web crawler to search the web and served up results from other search engines, like Google and Bing. One of the reasons Yahoo! stumbled was that it prioritised old, trusted websites over newer, more relevant ones. In contrast, Google brought fresh content to its users, making it increasingly popular.
Over time, search engines have become even more sophisticated and more mobile. Users, many of them children, have the entire web at their disposal at all times. This also means the potential to access or receive inappropriate or harmful content is very high. The internet is an expansive place that allows various groups and communities to meet and scale their influence, for good and bad.
Not your neighbourhood library
One thing may lead to another, and a child, minor, or even an adult might stumble upon, be served, or deliberately seek out content that could be harmful to them. This issue has been present since the creation of the first search engines, and risky content remains readily available on social media, online forums, websites and ads.
Back to the evolution of the search engine. To better serve users, search engines and social media alike started using predictive search and monetising it; the algorithms leveraged by search engines began not just to locate content but to suggest it. Large social platforms and search companies employ these developments to drive profit via ads, for example, but also to “feed” users content that has the potential to (artificially) broaden their interests. In this manner, search behaviour informs the users’ “for you” or “suggested” pages. This can be particularly problematic for children and young adults, whose interests and personalities may not yet be fully formed. This pattern also opens up children and their interests to immediate and future monetisation.
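To make the mechanism concrete, here is a deliberately simplified sketch of how behaviour-based suggestion can work. This is not any platform’s real algorithm — the function, the tags and the example queries are all hypothetical — but it illustrates the core idea: content is scored against a user’s past searches, so a run of niche queries steadily pulls more niche (and potentially riskier) content to the top.

```python
from collections import Counter

def suggest(history, candidates, top_n=2):
    """Toy recommender: rank candidate items by overlap with search history.

    history:    list of past search-query strings
    candidates: dict mapping an item title to a set of descriptive tags
    Returns the top_n titles whose tags best match the history.
    """
    # Count how often each word appears across the user's searches
    weights = Counter(word for query in history for word in query.lower().split())
    # Score each candidate by the summed weight of its matching tags
    scores = {title: sum(weights[tag] for tag in tags)
              for title, tags in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical history: a few diet-related searches in a row
history = ["what i eat in a day", "low calorie dinner", "calorie tracker app"]
candidates = {
    "Healthy recipes":    {"recipes", "dinner", "healthy"},
    "Extreme diet forum": {"calorie", "diet", "weight"},
    "Travel vlog":        {"travel", "vlog"},
}
print(suggest(history, candidates))  # ['Extreme diet forum', 'Healthy recipes']
```

Even in this toy version, the repeated word “calorie” outweighs everything else, so the riskiest item ranks first — no malice required, just pattern matching on behaviour.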
When “search” gets personal
Parents and educators must be aware of the dangers awaiting minors online and educated enough to help them. To highlight how direct a correlation there is between behaviour-based search and the provided results, let’s consider how easily a “What I eat in a day” video may land you on a pro-ana (pro-anorexia) online forum, a thinspiration (thin inspiration) message board, or even a thread full of self-harm tips or other explicit content.
Social issues and technology have evolved to a point where stopping them is difficult. Algorithms work tirelessly to bring users the content they calculate they might enjoy and interact with. Therefore, we have to do everything in our power to protect children, minors and ourselves.
This issue has now captured the attention of popular media and some governments, who recognise the danger this brings. The story of Molly Russell was one of the first to bring the issue into the light of day and get people talking. Even though large social media platforms endeavour to protect their users, efforts certainly lag behind rapid development in business and technology.
Some states have taken it upon themselves to protect the most vulnerable. In early March 2022, lawmakers in the US state of Minnesota set out to pass a law prohibiting social media platforms from using algorithms to suggest content to anyone below 18 years of age. However, this initiative has met opposition. Tech industry lobbyists claim that passing the bill would violate the First Amendment, preventing companies from recommending helpful content to users, and that it would require companies to collect more data on their users. Another argument in opposition is that the law, however well intended, would undermine parental choice and restrict access to valuable technologies.
A toolbox for prevention
Kids want to spend time on the internet but should not be wholly unsupervised. A great tool to help you keep tabs on your child’s behaviour online is Parental Control. In addition to providing limits on how long your child can access certain apps and websites, it can also block specific content types and URLs for PCs and mobile devices alike.
Web Guard is one of the best features of ESET Parental Control, found in ESET Smart Security Premium. Since websites can be categorised according to keywords, Web Guard blocks categories it deems inappropriate for your child’s age group. Of course, adult sites featuring pornography and gambling are blocked for all age groups. For Android devices, there is even a Safe Search feature that filters search engine results, so you do not have to worry about search engines suggesting inappropriate content your child is not ready to view. You can also manually blocklist websites and apps you deem inappropriate for your child, and likewise allowlist the resources you approve of.
Whether or not you start using Parental Control, an even more critical task remains: educating yourself about the content that is on the web and having regular conversations with your children about the online and offline worlds. Talking to your children is one of the best tools you can give them to protect themselves. Education on any subject should start in the family, especially for personal and private topics and our online presence.
Children and minors deserve to be treated with respect and educated about the choices we make about or for them. Talking to them about their online behaviour may make them feel like we are invading their privacy, so be sensitive and make sure they feel heard and understood.
To learn more about safety online for children, visit Digital Matters, a free online safety education tool created in partnership with Internet Matters. Internet Matters is a not-for-profit, industry-funded members body that helps families stay safe online, providing resources for parents, carers and educational professionals. The organisation works with partners from across the industry, government and third sector to raise awareness and provide advice on the issues affecting children in the digital age, including cyberbullying, screen time, digital resilience, extreme content, privacy and exploitation.