Risks Associated with App Use

Predators and Grooming

This is a very sensitive subject, and as such must be handled delicately. We must weigh the dangers associated with malicious online users, and the devastating results of their online interactions with children, against the potential harm that any story we report on might cause to those who come to our blog to learn.

We recognize that having to relive a trauma that a user, or their child, has gone through can have a huge impact; however, we must highlight some of the risks involved with unfiltered online activity. We shall therefore attempt to use online incidents where victims are not named and the cases have a sweeping scope and impact.

Two specific cases have been pre-selected which we feel highlight the risks posed to all age groups regarding both grooming and the availability of personal images and content when data-sharing restrictions and precautions are not employed.

Case 1

In his article entitled ‘Ten per cent of online child sex abuse images feature infants’, published on the 6th of February 2018 for The Irish Times online, Conor Gallagher reports that one in every ten online child abuse images detected in Ireland depicts children under 3 years of age, a statistic we feel will be shocking to most parents.

The article goes on to discuss how Hotline.ie received 7,141 tip-offs about suspected child abuse imagery. Most of the reports came from people who simply stumbled across this type of imagery, showing that it is widely and publicly accessible. The former senator and child protection advocate Jillian van Turnhout is quoted as saying: “People think of the victims as being teenagers and upwards. There is no age too young for these images. Europol has dealt with images with an umbilical cord”.

The article goes on to break down the confirmed online child abuse material reported to Hotline.ie as follows: 10% fell into the age group of 3 years or younger, 61% into the age group of 4-12 years, and 29% into the age group of 13-16 years. However, Hotline.ie recently updated its data set, which highlights an even more worrying set of statistics, as seen below.

 

Figure 1

The above chart, taken from the 2017 Hotline.ie annual report, details that out of 7,591 cases reported to Hotline.ie, 5,789 constituted child sexual abuse or exploitation investigations. Child sexual abuse imagery was confirmed as being hosted online in 30 countries worldwide. These latest figures reveal that, for the first time in 8 years, none of the hosted material was traced back to Ireland; in other words, only one clean year out of eight. A very worrying data point indeed.

1 in 5 of the websites investigated by Hotline.ie were revealed to be disguised websites designed solely to distribute child sexual abuse imagery. Of the sexual abuse imagery investigated, 53% appeared to depict rape and sexual torture, and 79% appeared to depict children as young as newborns and as old as 12. These are truly horrific statistics.

There are two very important things that should be obvious here, which the report does not mention but which we feel should be taken from these figures. Firstly, a demand for such despicable content exists; secondly, that demand is being met. Therefore, ensuring reduced visibility of children’s content and accounts online is fully justified and urgently needed.

Lastly, this article serves to further support the finding of the 2018 CyberSafeIreland annual report that children of all age groups are exposed to these risks.

On the very same day, another article was published online for The Irish Times, this time by Conor Lally. Entitled ‘Gardaí find 70,000 child sex images on man’s computers’, it reports that Gardaí had discovered 70,000 images of child sexual abuse on computers alleged to belong to a man targeted in a series of raids. The raids were conducted across a wide range of counties, including Dublin, Meath, Wicklow, Wexford, Carlow-Kilkenny, Laois, Kerry, Waterford and Kildare. The operation, which was coordinated with international agencies including the FBI’s child exploitation unit and law enforcement agencies in Canada, was code-named Operation Ketch III, and Gardaí estimated that, beyond the 70,000 images recovered, up to 150,000 images went unrecovered.

This report reveals that Ireland’s child sex abuse problem is part of a larger international problem, and that its scope is wide-ranging, reaching relatively quiet, sparsely populated counties as well as large, densely populated ones such as Dublin. This confirms the problem is national, not localized, and if there was any doubt about the dangers of online grooming, it is surely laid to rest by these two articles alone. Sadly, as anybody who has dared to investigate this subject can attest, this is unfortunately just the tip of the iceberg.

Case 2

Now we shall examine a particularly sensitive case. The case relates to the grooming of a 14-year-old girl by a Canadian national named Joshua Robert Tremblay. We shall not name nor touch on any personal specifics pertaining to the victim in this case for obvious reasons, but Mr. Tremblay has since been tried and convicted.

As reported by Anne Lucey for The Irish Times on the 13th of April 2017, in an article entitled ‘Canadian who travelled to Ireland for sex with girl (14) sentenced’, Mr Tremblay was found to have contacted the young victim through social media. Canadian police, having investigated the case, found that not only had the man sent nude pictures of himself to the victim, but he had travelled to Ireland and had intimate relations with the child on several occasions.

The specifics of this case are truly horrifying, and even investigating the story proved very difficult for our group. In this case, the perpetrator managed to manipulate the young victim into agreeing to have intimate relations with him. Luckily, this did not prevent the Canadian Crown prosecution service from trying the accused. The case was so reprehensible in the eyes of both the Irish Gardaí and the Canadian police that, although no specific treaty exists which compels Canadian police to co-operate with Irish Gardaí, laws do exist that allow Canadian authorities to try a Canadian citizen at home for sex crimes committed in foreign states; in this case the Canadian police co-operated to the fullest extent. They ultimately succeeded in securing a conviction against Mr. Tremblay, though the length of the sentence is not listed, as court records are not released to foreign nationals.

Our group purposefully selected this story because we felt that the stories frequently seen on Facebook, of both Irish and foreign nationals being ‘trapped’ while attempting to groom what they believe to be a young online user, might hit a little too close to home, but also because we wanted to highlight to parents that it is not just malicious online users from Ireland they need to be concerned about. Some of these disturbed individuals are willing to travel thousands of miles to pursue the grooming of a child victim. Constant vigilance, monitoring and a proper understanding of best safety practices where online security is concerned should always be employed. We once again reiterate that we felt we could not ignore this area of research due to the risks faced by children online. It is certainly not the purpose of this report to scare parents, but parents must recognize that the danger is very real.

Risks of Unfiltered Access to Snapchat

The minimum age for Snapchat use, according to its Terms of Service, is 13 years, and this should be respected. Legally speaking, if a child under the age of 13 uses Snapchat with the consent of a parent, the parent assumes responsibility for any harm done.

The Discover feature on Snapchat, as mentioned above, carries very adult themes. An example would be a content creator called ‘Brother’, which discusses relationships and sex, topics which are obviously adult-themed. One of their posts features a video of a girl undressing; although filmed without nudity being shown, in the eyes of an adult it may be inappropriate for children. Examples are shown below, as displayed on a test account created by our group with the age set to 15 years old. When we specifically tested whether these content creators were censored on an account with its age set to under 18, the following still came up.

Figure 2

Snapchat unfortunately faces the same issue many other social media applications face: child predators on its platform. The police force in Humberside in the United Kingdom reported a total of 50 offenses of sexual communication involving children between April 3rd, 2017 and January 3rd, 2018; seven of these happened on Snapchat. Of 956 cases of child grooming through social media, 20.1% took place on Snapchat.

Usually a snap is deleted after a set period, lulling users into a false sense of security. Believing their snaps are gone for good, children often send silly or inappropriate pictures. What some users don’t realize is that, although their snap is deleted from Snapchat, a recipient can screenshot or record what was sent and keep it that way.

As previously mentioned, Snapchat’s SnapMap feature allows users to track other users’ positions in real time. The ability to track users is a very scary thought, particularly for parents. Combined with the knowledge that predators are using apps to groom children, this may lead to the question of whether Snapchat’s age restriction should be raised, or children banned from it entirely. However, there may be another, less restrictive way of keeping children safer: parental guidance about safety practices, coupled with the proper modification of in-app settings, would increase app safety. We hope our platform will help with this, and we will cover guidelines to reduce the online footprint of Snapchat users in a later section.

There are also “hacked” versions of Snapchat which give their users special features, such as geographic spoofing, which fakes your location on the SnapMap, and disabling the screenshot log, which tells users who has screenshotted their snaps. Snapchat states clearly in its terms of service that the use of these apps is not permitted and that whoever is caught using them will be banned. Despite this, it does not prevent predators from using them.

The Our Story feature on Snapchat is also potentially dangerous, as children can add their story to the map and anyone can see it without being on their friends list. A predator could then travel to that location and add a snap to Our Story to start a dialogue with them.

Risks of Unfiltered Access to Instagram

As indicated by a graph from Statista.com (figure 3 below), Instagram reached one billion monthly active users as of June 2018. Of those, the main user groups are aged 18-24 and 35-44.

Figure 3

It also shows that 7% of the active user base is in the 13-17 range, meaning approximately 70 million users are within this age range. It is also reported that 52% of teens use Instagram.

There were also approximately 25 million businesses on Instagram as of 2017, with 8 million business profiles on the platform. Businesses use the platform to promote their brands by posting pictures and videos of their content. Alongside these there are 2 million advertisers, up from 1 million in March 2017, with mobile advertising revenue of approximately 7 million dollars.

There is also a large number of celebrities who use Instagram to promote their ‘brands’ and to engage with their fans. Many celebrities use the platform to promote their own brand; however, many are also paid by companies to promote a brand to their fanbase, and it is often not clear whether they are being paid or simply like the product.

Figure 4

As seen above in figure 4, we have a celebrity advertising a product on her personal page, and the only way to tell it is a paid advertisement is via the hashtag #ad displayed in the inline text within the picture’s tag.

The most recent data breach on Instagram itself took place in September 2017, in which six million users had their phone numbers and email addresses exposed. These details ended up being sold on a searchable database known as ‘Doxagram’ for $10 a search. The database has since gone offline but is most likely still in the possession of the hackers, and likely available on the dark web. The breach was confirmed by the CEO of Instagram on the official Instagram Tumblr, although the exact numbers are not mentioned in that statement.

There was another data breach in December 2018; however, that was a Facebook data breach and would only affect accounts that use the Facebook log-in feature.

There is a noticeable amount of inappropriate content on Instagram. Firstly, inappropriate content can be found on the Instagram TV (IGTV) feature: accessing it via an account where the age was set to under 18 (15 years old, to be specific), the first clip found was of a woman stripping and changing on camera. This could be inappropriate for children of this age.

Below, in figure 5, is an example of a very quick search which led us to an account containing pornographic material of a very explicit nature. Finding this material was as simple as searching the word “hot” in the search feature.

Figure 5

There is also the known phenomenon of Instagram models who post suggestive photos on the platform to garner likes and attention, which they then use to try to sell naked photos to individuals. These accounts are easily accessible to anyone who has access to the platform.

There has also been a rise in fake celebrity accounts that are often used to groom children. Their operators pose as celebrities to try to get in contact with underage users. To combat this, Instagram has a verification service, which enables celebrities to prove that an account is their own. Verified users have a blue tick beside their name; if an account doesn’t have a tick, it is likely fake. As seen below in figure 6, the real Donald Trump account on the left is verified with a tick, yet 3 fake accounts still exist.

It is also worth noting that if a famous celebrity can be faked like this, despite the verification feature, it is obviously far easier to fake a non-celebrity profile.

Figure 6

Risks of Unfiltered Access to WhatsApp

At the time of researching this blog, one of the biggest scandals is the internet phenomenon known as Momo, or the Momo Challenge. For context, the Momo Challenge is alleged to be an app-based text challenge designed to elicit information from the user, which also allegedly encourages users to self-harm; it is said to be a viral game shared on messaging services like WhatsApp.

The premise of Momo is that when a child messages Momo, they are allegedly sent creepy messages and ordered to perform violent acts; these acts escalate into ever more extreme demands, until the child is ordered to commit suicide.

There have been stories claiming that the terrifying image of Momo has been appearing in videos of Peppa Pig (a well-known children’s animated show) and gameplay footage of Fortnite (a video game popular among children) posted on YouTube, and, as stated in the Vox report documented in our research thesis (available to download here), it is now alleged that the challenge has spread to Snapchat.

However, a thorough search of YouTube reveals no evidence of these videos existing. Google, the parent company of YouTube, has also released an official response on its support website about the Momo Challenge, stating that it has seen no evidence of any videos promoting the Momo Challenge, that such videos are against its policies, and that it encourages any incidences of the Momo Challenge to be reported immediately. Videos that discuss, report on or educate about Momo are still permitted on the platform, and images of Momo are allowed within video thumbnails; however, the image is not allowed to appear on YouTube Kids (Julia, Google Employee, 2019). To confirm this, we tried to find Momo on the YouTube Kids app, but our attempts were unsuccessful.

The challenge is linked to a statue called “Mother Bird”, created by Japanese artist Keisuke Aisawa. The sculpture originally had nothing to do with the challenge, and in fact predates it by two years. Upon learning that the sculpture’s image was being used for such purposes, the artist promptly destroyed the Mother Bird sculpture, stating in an article by The Sun that he felt “responsible” for terrifying children with his work, but also emphasizing that ‘Momo is dead, she doesn’t exist and the curse is gone’.

What led to the growth of the Momo Challenge was its suspected connection to the suicide of a 12-year-old girl in Argentina in August 2018; however, after several days of researching, we were unable to find any reputable evidence for this online. This did not stop the media hype from escalating and spreading globally at a tremendous rate, despite the absence of any documentable evidence supporting the existence of the alleged offending material.

Following on from the alleged tragic suicide in Argentina, the challenge is said to have spread further to Mexico, with an official newsletter published by the Office of the General Prosecutor of the State of Tabasco warning citizens about “El Momo”.

This highlights a worrying trend: the story appears to spread rapidly despite no evidence being presented of the existence of the alleged offending material. This, however, does not prevent official sources from issuing warnings that treat the allegations as fact.

The fear surrounding Momo made its way to the United Kingdom and the Republic of Ireland at the beginning of 2019, with schools, police services and governments globally issuing stern warnings to parents; a prime example is the Police Service of Northern Ireland issuing a warning about the “Dangers of Momo”. However, as stated above, these warnings are unfounded and based on hearsay.

The story was further elevated by tweets warning about the scandal, with even famous Instagram star Kim Kardashian posting about the subject on her Instagram story.

Figure 7

This only succeeded in adding more unjustified hysteria to the minds of parents. It wasn’t until organizations like the UK Safer Internet Centre, CyberSafeIreland and many children’s charities began denouncing the Momo Challenge, calling it “fake news”, that the hysteria began to fade. Many of these organizations stated that the news outlets were in fact causing more harm than good, because the large amount of false information being spread around was causing the hoax to grow on its own. After this, many news outlets began to publish articles, based on legitimate research, about how the Momo Challenge was a hoax.

Through the research for this paper, it became very obvious that much of the hype around this phenomenon was due to the vast number of parents who do not understand how to protect their children on the internet, and to the mass of uncited news articles, many of which made sweeping claims, such as The Mirror reporting that the challenge was supposedly linked to 150 suicides in Russia, even though that claim came from another, very similar challenge, known as the Blue Whale Challenge, which was also proven to be a hoax. All of this culminated in the Momo phenomenon becoming larger than it originally was, and it highlighted the necessity for parents and children to be better educated on internet safety and on how to implement security features on a device used by a child. Most importantly, it highlights how parents must try to keep an open dialogue with their child when it comes to internet safety, so that if a dangerous situation were taking place online, hoax or not, the child would feel safe confiding in a parent.

Risks of Unfiltered Access to YouTube/YouTube Kids

The most obvious danger of exposing children to unfiltered YouTube content is allowing them to engage with strangers online through the comments section, where anyone can inspect a user’s profile, gather information about them, and see videos they have publicly uploaded.

However, another issue that is likely less known to parents is an incident known as “Elsa-gate”. Elsa-gate is a controversy concerning videos suggested by YouTube for kids, even on the YouTube Kids app, which contain themes that are extremely disturbing and inappropriate for children, as seen below in figure 8. The videos can contain violence, sexual situations, drugs and alcohol, and much darker material, and may feature popular Disney characters, or people dressed up as them, partaking in these acts.

Figure 8

The reason these videos were suggested as suitable for children is the tags the uploaders used when adding the content, such as “education”, along with titles featuring Disney characters. Even though this abuses the terms and conditions, the YouTube algorithm deems these videos to be made for children, and they sometimes slip through the cracks in YouTube’s child-safety algorithm. Above is a screenshot, taken from YouTube, of these inappropriate videos. You can also see just how many views these videos are getting, ranging from 3 million to 12 million.
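
As a toy illustration of why such metadata-based filtering is easy to game (this is our own hypothetical sketch, not YouTube’s actual classifier, which is proprietary and far more sophisticated), consider a check that trusts only the uploader-supplied tags and title: disturbing content carrying the “right” labels sails straight through.

```python
# Toy sketch of a naive, metadata-only "kid-friendly" check.
# Our own illustration; NOT YouTube's real child-safety algorithm.
KID_TAGS = {"education", "kids", "nursery rhymes"}
KID_TITLE_WORDS = {"elsa", "spiderman", "peppa"}

def looks_kid_friendly(title: str, tags: set) -> bool:
    """Trusts uploader-supplied metadata only - the flaw Elsa-gate exploited."""
    title_words = set(title.lower().split())
    return bool(tags & KID_TAGS) or bool(title_words & KID_TITLE_WORDS)

# A disturbing video with child-friendly labels is waved through:
print(looks_kid_friendly("Elsa learns colours", {"education"}))  # True
```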

At the time of researching this document, a new scandal has come to light. This scandal does not pertain to inappropriate videos, but to comments on innocent videos, and it highlights the danger of children uploading their own videos to the YouTube platform. On the 17th of February 2019, the YouTube channel “MattsWhatItIs” posted a video entitled “YouTube is Facilitating the Sexual Exploitation of Children, and it’s Being Monetized (2019)”. Matt Watson, the owner of this channel, talks about how YouTube is being abused and turned into a source of child abuse material. He describes how children innocently upload videos of themselves doing gymnastics or showing off new clothes they have bought. Although the videos are uploaded innocently enough, suspicious timestamps are found in the comments. A timestamp in a comment references a specific time in the video and allows users to click on it to view the video from that exact moment. In his video, Matt explains that pedophiles are using timestamps to mark specific moments where children are in exposing positions that may display partial nudity.
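
To make the mechanism concrete: a timestamp such as “1:23” in a YouTube comment is rendered as a link that jumps to that second of the video, equivalent to appending a `t=` parameter (in seconds) to the video’s URL. The short sketch below is our own illustration (the video ID shown is a placeholder, not a real video); it shows how comment timestamps map to such deep links.

```python
import re

def timestamp_to_seconds(stamp: str) -> int:
    """Convert a timestamp like '1:23' or '1:02:45' into a number of seconds."""
    seconds = 0
    for part in stamp.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

def deep_link(video_id: str, stamp: str) -> str:
    """Build a URL that opens the video at the given timestamp."""
    return f"https://www.youtube.com/watch?v={video_id}&t={timestamp_to_seconds(stamp)}s"

# Extract every timestamp from a comment and print its deep link.
comment = "so cute! 1:23 and 4:05"
for stamp in re.findall(r"\b\d{1,2}(?::\d{2}){1,2}\b", comment):
    print(deep_link("VIDEO_ID_PLACEHOLDER", stamp))
```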

In his video he goes on to show that not only are pedophiles abusing the comment section to share moments in kids’ videos where they are in exposing positions, but that if you click on a few of these videos in succession, you will be recommended more videos of the same kind, allowing pedophiles to browse these videos easily and search for more.

The reason YouTube recommends these videos is its recommendation algorithm, which suggests videos based on a user’s watch history. YouTube has since responded and adjusted its algorithms to reduce this abuse.
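
As a simplified illustration of the feedback loop described above (our own toy sketch, not YouTube’s actual system, whose real recommender is proprietary and vastly more complex), watching just a few similar videos is enough to push more of the same kind to the top of the recommendations:

```python
from collections import Counter

# Toy catalogue: video id -> content tags (illustrative data only).
CATALOGUE = {
    "v1": {"gymnastics", "kids"},
    "v2": {"gymnastics", "kids"},
    "v3": {"cooking"},
    "v4": {"gymnastics", "kids"},
    "v5": {"cooking", "baking"},
}

def recommend(watch_history: list, k: int = 3) -> list:
    """Rank unwatched videos by how many tags they share with the watch history."""
    seen_tags = Counter()
    for vid in watch_history:
        seen_tags.update(CATALOGUE[vid])
    unwatched = [v for v in CATALOGUE if v not in watch_history]
    return sorted(unwatched,
                  key=lambda v: sum(seen_tags[t] for t in CATALOGUE[v]),
                  reverse=True)[:k]

# Watching two similar videos pulls the third of the same kind to the top:
print(recommend(["v1", "v2"]))  # ['v4', 'v3', 'v5'] - v4 now ranked first
```

Each further click reinforces the shared tag counts, so the loop tightens the more such videos are watched.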

This shows us that YouTube poses risks to children both as content creators and as viewers, and it leads to the debate: should children use it at all?
