Informational Material

How can the protection of children’s rights in the digital environment be strengthened?

In a few days, the European Union’s General Data Protection Regulation (GDPR) [1] will complete its first year in force. Some may argue that the adoption of the Regulation has genuinely contributed to the effective protection of children’s rights in the digital environment by requiring parental consent both for the collection, storage, processing and distribution of a child’s personal data and for the child’s participation in the Information Society. Others may counter that, while the Regulation has indeed laid down some guarantees for child protection in the digital world, many challenges remain and the road towards effectively enforcing the rights of the child in the digital space is a long one.

In the first place, one of the fundamental rights of the child in need of protection in the digital environment is the right to participate, to be heard and to have the child’s opinion taken into account in decision-making procedures. Although children are active on the Internet and in the digital domain generally, they have no place in decision making. In other words, the child has no opportunity to express his or her views, desires and experiences before the political decisions that will significantly affect his or her life are made.

Implementing the right to participation does not necessarily mean that a “chair” should be reserved at conferences with politicians. Rather, it means strengthening the active role of children in issues that concern them and, consequently, their digital social responsibility in a democratic society. The degree to which children’s participation actually influences political decisions also determines how effective that participation is. It is not enough for policy makers to consult children; they must be genuinely willing to interact with them, listen to their opinions and take them seriously.

In this way, the creation of a friendly and open culture of interaction with children helps reduce digital illiteracy, which is reinforced by discriminatory behaviours in the digital environment, such as racist, xenophobic, homophobic and sexist treatment and comments. For this reason, equal opportunities of access to digital knowledge, the implementation of training programs, and increased resources so that all children, including those from minority and vulnerable groups, have access to the necessary tools and equipment are contributing factors in strengthening digital literacy. It is worth mentioning, however, that adults, parents and caregivers also need digital training in order to become familiar with the digital environment and the challenges it poses.

Speaking of familiarity, we should of course refer to the role of parents and caregivers. In particular, it is important that they overcome the ideology of ‘protectionism’, over-reaction and one-dimensional decision-making in order to genuinely protect the best interests of the child, thus fulfilling their primary role as parents and caregivers. They are called upon to cover the child’s physical, mental, spiritual and social needs by actually listening to the child’s needs and desires. Children have grown up in a digital environment; they are citizens of the internet, and parental care must extend there as well. For this reason, parents and caregivers should adapt to the digital environment and be aware of the dangers children face, asking for support from the state and civil society where needed.

Of course, just as they should be aware of online dangers, they must be equally careful with their own digital behaviour. This means that parents and caregivers should familiarize children with the concepts of privacy and data protection from an early age and limit children’s excessive exposure on social media. Only in this way will children be able to set boundaries in the digital environment, limit violations of their rights and protect themselves.

To sum up, both states and the private sector, including marketing and advertising companies, should treat children as rights holders and restrain practices of manipulation and exploitation and violations of children’s privacy and rights. For their part, children should be aware of and understand both legitimate and misleading forms of digital marketing in order to develop critical thinking and protect their rights as consumers.

Recognizing children as subjects of digital rights significantly determines the recognition and protection of their rights as digital workers, digital citizens, digital students, digital consumers, digital patients and digital defendants. The establishment of a proper legal framework for children’s digital rights is essential for the holistic and effective protection of children’s rights.

[1] Please note that the bill on the application of the Regulation has not yet been put to the vote in our national legal order. However, due to its nature as a Regulation, its provisions are directly applicable at the national level.

YouTube and children. A protective relationship?

A few days ago, on 18 February 2019, I noticed a YouTuber who was trying to explain how this platform facilitates the sexual exploitation of children by paedophiles. In fact, he commented that there is considerable profit being made in this area, as Fortnite, Disney and other companies advertise on these videos! I wondered how this was done, and how obvious it must have been for him to notice and uncover it.

Matt Watson explains that the story begins with videos of children in everyday snapshots: playing sports, playing games, or in a pool in swimsuits. They are innocent videos with no dangerous, obscene or pornographic content. In the eyes of paedophiles, however, they take on soft-porn content: paedophiles comment on them, highlight frozen frames and forward them in this way. They do not hesitate to add links to child pornography material in the comments section.

For the credibility of his experiment, Matt Watson used a new YouTube account and a VPN so that his search would not be affected by previous activity on the platform. The result was shocking: both the sidebar and the first page of results were filled with videos of minors in various obscene poses. He thus demonstrated how easily YouTube’s algorithms surface these videos: simply typing phrases such as ‘pool girls’ or ‘little girls exercising’ brought up videos with the content and comments described above.

YouTube faces headaches over both the characterization of these videos and how to deal with them, as well as over the fact that they are a lucrative activity, since their high view counts attract advertisements from large, well-known companies.

With regard to their characterization, the available ways of flagging videos on the platform concern inappropriate language or the coverage of controversial issues and sensitive events. But it is not the first time YouTube has been accused of endangering children, or of failing to protect them from exposure to paedophiles through the comments posted on their videos.

In 2018, an investigation by The Times of London showed that YouTube had not been able to promptly remove live streams with inappropriate content. In 2017, Google was criticized by members of YouTube’s Trusted Flagger program, which includes child protection specialists, who protested over the lack of action against the risk of sexual exploitation of children through unauthorized photos and obscene comments.

In an unpublished 2017 interview with Forbes, one of the participants in this awareness campaign expressed his dissatisfaction with the slow pace at which YouTube responded to and handled the reports submitted. After filing 526 reports in under 60 days regarding the safety of children on the platform, they received a total of only 15 responses, while only 8 were reviewed for a violation of the European Guidelines. A YouTube representative pledged that they would try to add a comment review feature and would hire additional staff to manage the situation.

In the same context, it is worth mentioning the 2013 Daily Dot report on the extent to which paedophiles come into contact with minors through comments on YouTube. Videos that attract the interest of paedophiles date back to 2013, and they were still proliferating in 2019! Millions of comments and more than 400 YouTube channels were deleted for the unwelcome comments and obscene videos they promoted, as Chi Hea Cho, a spokesperson for Google, YouTube’s parent company, said. Indeed, Chi Hea Cho pointed out that these illegal activities were also reported to the National Center for Missing and Exploited Children.

The children who record and upload these videos (though they do not always upload them themselves) are, in most cases, under the age of 13. And I wonder: where are their parents? How is this material still being reproduced if YouTube knows how it is being used and forwarded? Why are Google and YouTube not taking precautionary measures instead of repressive ones? Instead of deleting comments and accounts after the fact, why do these platforms not develop protection filters and implement policies in line with child protection rules?

Epic Games and Nestlé eventually withdrew their ads from the corresponding videos on both platforms. Is that enough?

Who ‘plays’ with your personal data?

Video games are a source of entertainment for both children and adults. Most video game studies focus on their content, on whether they incite violence or are neutral, or on their negative effects on gamers. But does anyone wonder who is hiding behind them, and what they earn from the services provided, which are usually free of charge?
Fortnite has so far attracted 125 million users, while Epic’s revenue amounts to $300 million. Installation is free, yet the profits are huge! For several video game companies, profits come from the processing of players’ data.
Back in 1986, when ‘The Legend of Zelda’ was released on the NES, it was difficult, even impossible, to store players’ data, as it was for the rest of the console games of that era. Nowadays, video game developers use advanced IT platforms such as Hadoop and MapR to collect, process and analyze Big Data in order to better understand players’ behaviour.
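To make the idea concrete, here is a minimal sketch, in plain Python, of the map/reduce-style aggregation that platforms such as Hadoop perform over player telemetry. Plain Python stands in for an actual cluster, and the event format and field names (player_id, session_seconds, purchase_eur) are illustrative assumptions for this example, not any company’s real schema.

    from collections import defaultdict

    # Hypothetical raw telemetry events: (player_id, metric, value).
    events = [
        ("alice", "session_seconds", 1200),
        ("bob",   "session_seconds", 300),
        ("alice", "session_seconds", 900),
        ("bob",   "purchase_eur",    9.99),
    ]

    def map_phase(event):
        # Emit one (key, value) pair per event, as a Hadoop mapper would.
        player_id, metric, value = event
        return ((player_id, metric), value)

    def reduce_phase(pairs):
        # Sum the values per key, as a Hadoop reducer would.
        totals = defaultdict(float)
        for key, value in pairs:
            totals[key] += value
        return dict(totals)

    profile = reduce_phase(map_phase(e) for e in events)
    print(profile)
    # {('alice', 'session_seconds'): 2100.0,
    #  ('bob', 'session_seconds'): 300.0, ('bob', 'purchase_eur'): 9.99}

A real pipeline would run this over millions of events in parallel, but the behavioural profile it builds per player is the same in kind.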
But what data can companies collect through their video games? A player’s physical characteristics, such as facial features, body movement and voice data, location, and information gathered from the social networks the player is connected to are some of them. Some games even contain data-gathering sensors, such as the popular Nintendo Wii console and, later, the Xbox Kinect. Through these, companies collect the player’s biometric data, such as weight and facial features, needed for the provision of services and the start of the game. For example, the Wii collected data about the player’s physical condition for the popular Wii Fit exercise game.
Moreover, the player’s social behaviour is inferred from the decisions he or she makes during the game, revealing temperament, leadership, fears and even political beliefs. See, for example, the questions in the game Catherine: “Are you a sadist or a masochist?”, “Do you carefully choose which underwear you wear?”, “Have you ever cheated on your partner?” Catherine is a video game developed by Atlus that concerns romantic relationships and the moral dilemmas arising from commitment and long-term relationships. Vincent, the game’s protagonist, must decide between his long-time girlfriend Katherine and the charming Catherine. Atlus released the game for PlayStation 3 and Xbox 360 in Japan and North America in 2011, while a Microsoft Windows version was released by Sega in 2019.
In addition, in some video games, such as Fortnite, players can buy products. In this way, companies learn and store the credit card or bank account details used for the payment. All of these data are used by companies to record who uses their products and how the products can be promoted. Video games have been linked to player surveillance ever since the release of the Xbox 360 in 2005. According to Stéphanie Perotti, Ubisoft uses its customer data mainly for marketing purposes and for demographic studies designed to continually improve the products and services it offers, to ensure that they meet customer expectations.
Apart from marketing, however, companies can actually improve the games they produce based on their users’ data. The developer of the successful Candy Crush series found that many users abandoned the game at level 65, and therefore made that level easier so that users would continue playing.
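As a minimal sketch of that kind of drop-off analysis, assume we have, for each lapsed player, the last level they reached before quitting; the records below are invented for illustration.

    from collections import Counter

    # Hypothetical data: the last level each lapsed player reached.
    last_level_reached = [63, 64, 65, 65, 65, 65, 65, 66, 70, 65]

    drop_offs = Counter(last_level_reached)
    total = len(last_level_reached)

    for level, count in sorted(drop_offs.items()):
        print(f"level {level}: {count / total:.0%} of players stopped here")

A sharp spike at one level (here, level 65 with 60% of the drop-offs) is the signal that the level may be too hard and is a candidate for re-tuning.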

So what can you do to prevent your personal data from being collected? Be very careful when you download or install a game. In particular, check the application’s privacy policy, its terms of use and installation, and what you are consenting to regarding the collection and processing of your personal data. Companies often ask for access to your data, such as your camera or location, and many applications request access to other applications.
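As a minimal sketch of such a check, the snippet below compares the permissions a game requests (as listed on its store page or in its manifest) against data you consider sensitive. The permission names follow Android’s convention; the requested list is invented for illustration.

    # Permissions that expose sensitive personal data.
    SENSITIVE = {
        "android.permission.CAMERA": "camera",
        "android.permission.ACCESS_FINE_LOCATION": "precise location",
        "android.permission.RECORD_AUDIO": "microphone",
        "android.permission.READ_CONTACTS": "contacts",
    }

    # Hypothetical permission list of a game about to be installed.
    requested = [
        "android.permission.INTERNET",
        "android.permission.CAMERA",
        "android.permission.ACCESS_FINE_LOCATION",
    ]

    for permission in requested:
        if permission in SENSITIVE:
            print(f"warning: this game can access your {SENSITIVE[permission]}")

If a simple puzzle game asks for your microphone or contacts, that is exactly the moment to re-read its privacy policy before tapping ‘install’.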
In closing, we note that video games have genuinely become integrated into children’s daily life and the culture of young people. It is worth noting that, according to a survey published in Nature Human Behaviour, 94% of parents know and pay attention to which video games their children play. Also remarkable is the reaction of some young people to criticism of video games, as in the case of the #GamerGate campaign. #GamerGate is an online movement that appeared mainly on social media (Twitter and Facebook) in August 2014, triggered by a post by Eron Gjoni. It was a campaign to intimidate critics of sexism in video games and, more generally, of the culture promoted through them.
Therefore, the answer to the risk of privacy and personal data violations is not a ban, but information and protection.
So, the next time you play, check first… who is playing with whom!

Sexual exploitation and data protection

Greek society, after the murder of 21-year-old Eleni in Rhodes, has at last confronted the ‘real’ dimensions that rape can take and has taken. Rape is not just sexual intercourse against the will of the other person. It also involves the coercion, the physical violence, or the threat of imminent and grave danger that accompany this non-consensual intercourse. And not only that: rape can also result in death, as we have seen.
However, sexual harassment is also a reality in the digital world. How? Increasingly, it can occur through the violation of personal data. Surveys show that 4% of adolescents aged 12-17 admit to having sent sexual messages showing themselves naked or semi-naked to other users via text message, while 15% of adolescents admit to having received such material. This is the so-called sexting: the sharing of photos and messages with mainly sexual content through applications on a mobile phone or other electronic devices. Sometimes, however, these messages are exchanged without the consent of the person depicted. In that case, the person depicted falls victim to a violation of his or her personal data.
But what are these personal data? Personal data are pieces of information pertaining to a person; they may be “sensitive” or “non-sensitive”. Information becomes personal through its connection, whether direct or indirect, with that person. It consists, in other words, of different pieces of information which, grouped together, can lead to the identification of a particular person. This information characterizes the person’s biological, physical and mental existence, as well as his or her social, political, economic and cultural existence. In this regard, due to the sexual content of the message, the naked or semi-naked image of the person is considered sensitive personal data, as it relates to the sexual life of the minor user.
But how can a violation of personal data lead to sexual exploitation? Once a photo ‘appears’ on the web, it is difficult to control its circulation. Most of the time, these photos are sent within a relationship of trust between sender and recipient. Up to that point there is no problem. Problems arise when this relationship is called into question, or when it was based on false pretences. Forwarding this material at a second level to other users, without the consent and often without the knowledge of the person depicted, constitutes a breach of personal data; where it is done with the purpose of committing a further offence, it also amounts to the crime of defilement, as well as to trafficking in pornographic material.
About two years ago, 22-year-old Lina committed suicide by falling from the ninth floor of a student dormitory in Thessaloniki. A prosecutorial investigation was ordered, including an electronic search for traces of any criminal behaviour involving the keeping, storage and processing of personal data, online threats to compel action or tolerance, and criminal association for committing a felony, as it appears that the girl had been threatened with the publication of her personal photos on the internet.
Greek law and jurisprudence provide strong safeguards. It remains only for us to become aware of them.

Hate speech and children: an online battle in the era of social media

Social media platforms offer a vast space for users to express their opinions and, conversely, to be informed. In this way the flow of information is free, and it should remain free, as UNESCO has stated. Hate speech lies in a complex nexus with human rights, as well as with concepts of dignity, liberty and equality. Its definition is often contested as too wide-ranging and open to manipulation. In national and international legislation, hate speech refers to expressions that advocate incitement to harm based on the target’s identification with a certain social or demographic group.

According to HateBase, in the majority of hate speech cases individuals are targeted on the basis of ethnicity, nationality, religion and, more recently, class. A framework for identifying a hate speech act could be based on: i) the character and popularity of the speaker, ii) the emotional state of the audience, iii) the content of the speech act itself as a call to action, iv) the historical and social context in which the act occurs, and v) the means used to disseminate it, including the type of language adopted. At this point anonymity plays an important role, since it also presents a challenge for dealing with hate speech online.

Apart from the International Convention on the Elimination of All Forms of Racial Discrimination for race-related speech, and Articles 19 and 20 of the International Covenant on Civil and Political Rights for protection against hate speech based on religion and sexual orientation, the UNCRC and its Protocols are the main and basic framework for online and offline child protection. Furthermore, the Additional Protocol to the Council of Europe Convention on Cybercrime (the ‘Budapest’ Convention) extends the Cybercrime Convention’s scope, including its substantive, procedural and international cooperation provisions, so as to also cover offences of racist or xenophobic propaganda. Finally, Council Framework Decision 2008/913/JHA sets out the European legal framework on hate speech and blasphemy and its interaction with freedom of expression, and highlights cooperation among Member States, mainly in Articles 1(1), 3 and 4.

At the same time, the right to freedom of expression needs protection, yet it is not, and cannot be, absolute. The public good requires its limitation. As the European Union Agency for Fundamental Rights repeatedly points out, limitations on the exercise of freedom of expression must pursue a legitimate aim and be justifiable on grounds of necessity, legitimacy and proportionality. Such a legitimate aim could be national security, public morality or public health. The principles of legality, necessity, proportionality and non-discrimination are set out in the United Nations Human Rights Committee’s General Comment No. 34 on the freedoms of opinion and expression.

In this way, we see that human rights can stand in juxtaposition in the context of hate speech. In particular, within the specific child protection regime recognized by the EU, Article 3 of the UNCRC on the well-being of the child can come into tension with Article 13 of the UNCRC on freedom of expression and information. Thus, freedom of expression may appear to conflict with protective measures limiting access to certain sorts of online material or activities. The child’s developmental process is of great importance here and affects children’s capacity to identify, assess and manage potential risks. Despite this tension, child protection and freedom of expression share a deep-seated belief in the importance of protecting basic human rights, grounded in the fundamental values of human autonomy and dignity.

To prevent and counter the spread of illegal hate speech online, in May 2016 the European Commission agreed with Facebook, Microsoft, Twitter and YouTube on a Code of Conduct (‘No Place for Hate’) to help users notify these platforms of illegal hate speech, to improve support to civil society and to improve coordination with national authorities. The four platforms agreed to assess the majority of users’ notifications against European and national legislation on hate speech, and committed to remove, where necessary, messages assessed as illegal. The IT companies also undertook to encourage the provision of notices and the flagging, at scale and by experts, of content that promotes incitement to violence and hateful conduct.

The four companies also agreed to work further on improving feedback to users and on being more transparent towards society at large. In the course of 2018, Instagram, Google+, Snapchat and Dailymotion announced their intention to join the Code of Conduct. This Code of Conduct would, of course, also enable the Member States, and in particular their law enforcement agencies, to further familiarize themselves with methods for recognizing and notifying the companies of illegal hate speech online.

However, given that not all risks are equal, different risks may require different actions. One useful distinction was provided by Sonia Livingstone, who identified four types of risk: commercial risks, aggressive or violent risks, sexual or sexually harmful risks, and values-based risks, such as hate speech. These risks are further distinguished by the observation that children may encounter each type in one of three ways: as recipients, as participants or as initiators. Both points stress the importance of precision in policymaking.

According to a press release of 1 March 2018, breaches of the rule of law are becoming more frequent in the European Union, and Members of the European Parliament therefore insist that Article 7 of the Treaty “should no longer be regarded merely as a hypothetical tool”. They also reiterated their call on the Commission to establish a European mechanism for democracy, the rule of law and fundamental rights.

The parental role is of great importance as well. It is estimated that about 60% of European children are daily internet users and are therefore considered “digital natives”. The “digital natives” concept, however, is misleading and poorly founded, resting on the assumption that children pick up new technologies quickly. A recent EU Kids Online study shows that even though 43% of surveyed children believe they know more about the internet than their parents, they lack digital skills such as blocking unwanted communications, changing privacy settings on social media, critically evaluating information and changing filter preferences. One telling example is the Twitch live-streaming application, which is well known to children, while its hidden risks are not.

The European Commission has favoured legislative measures providing stronger legal protection for children’s personal data in the online environment. Articles 6 and 8 of the GDPR introduce verifiable parental or custodian consent as a means of legitimizing the processing of a child’s personal data on the internet. Article 8 provides that parental consent is required where the processing concerns the personal data of a child below the age of 16, although Member States may lower that threshold to no less than 13. This age threshold is the bright line above which the processing of children’s personal data is subject to fewer legal constraints.

In practice, this divides all children into two groups: those old enough to consent to the processing of their personal data themselves, and those below the applicable threshold (somewhere between 13 and 16, depending on the Member State) who depend on parental approval of their online choices. Drawing such a strict line sits uneasily with the gradual stages of physical and social development. It also calls for a reconsideration of the generally positive perception of parental consent from a legal point of view: in particular, it is necessary to evaluate whether the measure is proportionate and whether it coincides with the human rights framework. A minimal sketch of this two-group logic follows.
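The sketch below, in Python, is an illustration rather than legal advice. The per-country thresholds in the table are assumptions made for the example, not an authoritative list; only the default of 16 and the floor of 13 come from Article 8 itself.

    # Illustrative, assumed thresholds; Article 8 GDPR sets a default of 16
    # and lets Member States lower it to no less than 13.
    DEFAULT_AGE_OF_CONSENT = 16
    MEMBER_STATE_THRESHOLDS = {"UK": 13, "DE": 16}  # assumed values

    def needs_parental_consent(age, member_state):
        # True when Article 8 would require verifiable parental consent.
        threshold = MEMBER_STATE_THRESHOLDS.get(member_state,
                                                DEFAULT_AGE_OF_CONSENT)
        return age < threshold

    print(needs_parental_consent(14, "DE"))  # True: below the 16-year bar
    print(needs_parental_consent(14, "UK"))  # False: threshold lowered to 13

The same 14-year-old needs parental consent in one Member State and not in another, illustrating how arbitrary the bright line can be relative to a child’s actual development.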

Under certain circumstances, placing the responsibility to consent on parents runs counter to the idea of children’s participation in the decision-making processes that concern them, an idea anchored in the UNCRC and recognized by both the EU and its Member States. Children’s rights to freedom of expression and privacy may be undermined where children’s access to information becomes limited and dependent on parents. The scope of their right to privacy would also shrink, as parents would be required to intervene in children’s private spaces, for example in gaming accounts, in order to make informed choices. It can therefore be observed that parental consent may sometimes contradict key principles of human rights law enshrined in the UNCRC.

Moreover, Recital 58 of the GDPR contains an extensive provision on the special child protection regime. In particular, it states that any information addressed to a child should be concise, easily accessible and easy to understand, in clear and plain language. In this way, the GDPR recognizes children as a “vulnerable” group of consumers requiring special protection. Last but not least, we are convinced that freedom of the media and media ethics should also be protected.

To conclude, the famous social media platforms enjoy a near-monopoly in the market, and their business model is based on processing their users’ personal data. A large and very active part of these users are children, who grow accustomed to the presence of these big companies in their daily lives and develop a strong consumer relationship with them. Codes of Conduct such as the one described above are important because they complement the existing legal provisions and offer a high level of protection. Of equal importance is the use of social media for children’s personal and social development. It is therefore crucial to strike a careful balance between freedom of expression and child protection. Even though finding this balance is not an easy task, collaboration between the relevant stakeholders has proven beneficial. Policy makers, civil society and the social media platforms themselves should agree on a common approach to child protection against hate speech on social media, with the existing case law and legal provisions as the basis of this collaboration. We should not forget the consequences of hate speech on social media, as in the story of Amanda Todd. We hope that these consequences will be limited following the adoption of the new Directive amending Directive 2010/13/EU, which addresses countering hate speech online (12 October 2018).