Digital Safeguards - Safeguards decision-makers need to put in place to ensure respect for values, ethics and norms in the digital space (e.g. EU policies, regulation, etc.)


Cybersecurity - Protection from hackers, fraud, viruses etc., and management of the risks of hybrid attacks - by state and non-state actors - that combine cyberattacks, damage to critical infrastructure, disinformation campaigns and the radicalisation of political narratives.

Answers:
Any data that is not objectively needed must not be collected by the state or by companies. Any incidentally collected data has to be deleted immediately if no reason for storage can be given. The dissemination of illegally or wrongly obtained data, as well as data abuse, has to lead to appropriate penalties. Data protection laws have to make clear that lawful access to data does not permit the linking of data sets. A violation of, e.g., the EU GDPR by security agencies has to be accounted for transparently and publicly. Anonymous mobile communication has to be protected legally and technically. The right to be forgotten has to be implemented. This has to apply to all people, independent of their citizenship. Unlawful surveillance by the government has to be punished by law. The global trade in surveillance technology has to comply with the rule of law and be properly controlled by an independent international agency. Whistleblowers have to be protected: no one brave enough to publicly expose hidden grievances should be discriminated against. IT systems have to be safe; private and public infrastructure therefore has to be better protected, inter alia through security-by-design regulation of IT technology. Governments and companies need to adequately protect citizens and customers against malware and hacker attacks. Not everything that can be connected to the internet should be. So as not to endanger the right to privacy and the freedom of the internet, decisions on internet safety measures have to be justified publicly or to democratically elected decision-makers.
Citizens should be provided with basic skills to identify and protect themselves from potential cybersecurity threats. The skill-building should be accessible to all citizens, including persons with disabilities.
EU programmes for citizens should enable the funding of programmes offering free informal learning about cybersecurity for individual end users. These programmes should address the lack of popular understanding of basic digital security and cybersecurity, raise awareness of the types and methods of cybercrime, support cybercrime prevention, and improve cybersecurity communication.
The protection of critical infrastructure is of vital importance. This has to be done in a democratic way, without using protection as an excuse to intrude on individual privacy. Cybersecurity cannot be left to "the market" and to private users; producers must be held liable for security failings.

Artificial Intelligence - An AI that is ethical and that protects people, communities and society from the escalating economic, political and social issues posed by AI.

Answers:
Algorithm-based technology, including artificial intelligence (AI), has been shown to produce biased outcomes owing to the underlying data or social context. Human judgment is still needed to ensure that AI-supported decision-making is fair. Where needed, AI development and usage have to be subjected to much more rigorous public discourse and proper regulation in order to ensure their use for the common good. The public interest should always be the main driver of public funding for AI. To ensure this, only open-source AI projects should be publicly funded.
Through the EU AI Regulation, the EU must ensure sufficient safeguards to protect citizens from any negative impact of AI technologies on their fundamental rights. Ensuring privacy, accessibility and non-discrimination in the context of AI deployment remains to be addressed, as do strong governance measures for enforcing the Regulation. In the meantime, proactive regulatory action should be taken to promote AI that brings tangible benefits to citizens, for example by promoting the development of AI-based assistive technologies for persons with disabilities or by ensuring the diversity of AI development teams.
Artificial intelligence and any other digital innovation should tackle, or at least not reinforce, existing discrimination against vulnerable groups such as migrants, people facing homelessness and social exclusion, or people with disabilities.
1. Human decision-making and oversight are essential to an AI-powered future, including through audits and risk assessments across the various uses of AI. 2. Decisions on red lines for AI applications - such as facial recognition - should be made in a participatory and inclusive manner, for instance through citizen juries, to ensure that today's inequalities are not replicated and deepened by AI.
• Maintain public supervision and monitoring of how AI is implemented, identifying discriminatory outcomes and preventing and mitigating discrimination risks (a minimal sketch of such a check follows this list)
• Ensure national AI strategies include a human rights perspective
• Ensure transparency over how AI is implemented, how data is collected, who manages the AI solutions, and how the systems are evaluated for discrimination
• Make a greater effort in diverse recruitment, so that the diversity of programmers working on AI can better mitigate the risk of embedding underlying biases in the machines
• Base the development of AI on extensive research on biases, discrimination and human rights, to ensure that the machines are not trained to exacerbate inequalities in society
• Incorporate into AI systems functions to evaluate respect for human rights in the way they are being used
• Include multiple stakeholders in designing AI solutions and deciding how to use them in society. This implies relying on minority groups and on CSOs working in various domains, to ensure a broad view of how AI could be used ethically.
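A minimal sketch, in Python, of what the first point's monitoring for discriminatory outcomes could look like in practice. The metric (demographic parity gap), the threshold and the data are illustrative assumptions, not a prescribed audit standard; a real audit would combine several fairness metrics with legal and domain expertise.

```python
# Illustrative sketch: a simple check for discriminatory outcomes in an AI
# system's decision log. The metric (demographic parity gap) and the threshold
# are assumptions chosen for illustration only.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns (gap, per-group approval rates), where gap is the difference
    between the highest and lowest approval rate across groups."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decision log from a benefits-allocation system.
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]

gap, rates = demographic_parity_gap(log)
print(rates)   # per-group approval rates
if gap > 0.2:  # illustrative threshold, not a legal standard
    print(f"Flag for human review: approval-rate gap of {gap:.2f}")
```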
1. Human rights impact assessment: A thorough, inclusive and transparent human rights impact assessment (HRIA) must be the starting point for all subsequent regulatory actions on any AI system. The findings of the HRIA should be integrated into the AI developer's or user's activities and products, as part of broader human rights due diligence. 2. Right to redress: An effective right to redress for affected groups should be added to all relevant EU AI legislation, with meaningful support (including adequate resources) for stakeholders so that they can fully exercise this right. 3. Meaningful stakeholder participation: Meaningful stakeholder participation, including external stakeholders such as CSOs (especially affected communities and representatives of marginalised groups), should be mandatory in the context of human rights due diligence by AI providers and users, with sufficient resources dedicated to supporting it. The EU should also give CSOs and external stakeholders an explicit right to appeal decisions and recognise them as having a legitimate interest.
There are many unrealistic and distorted expectations, fuelled by the industry, about AI's capacity to solve problems. Civil society projects providing education and raising awareness about AI among people at the local level should be financially supported.
In global competition, trustworthy AI is Europe's chance! This means AI that is guided by European values and principles, free of biases, and with human oversight in the lead. High-risk AI must be examined in this sense before being placed on the market, and in case of serious doubts it must be banned. Autonomous weapons have to be banned.

Algorithms - Transparency of algorithms.

Answers:
Even in a tech- and algorithm-based world, responsibility for decisions must always lie with, and be controlled by, humans. In order to ensure sovereignty of decisions and the self-determination of individuals, we need transparency about commercially and state-run technology, and especially about algorithms. We need public discussion on what happens once algorithms exceed human comprehension, both in ethical and legal terms and in terms of the responsibilities of governments and private entities. The use of (Big) Data in the health and care sectors has to be regulated and monitored. Online services in the health sector have to follow the highest security standards, with offline options remaining in place. In health and care facilities, the protection of privacy has to be ensured by applying high data protection standards. Commercialising patient data has to be prohibited. The use of data must not lead to a deterioration in insurance services. Limited capacity to agree to the use of digital services must not be abused. Fundamental human rights cannot be limited by business terms and conditions. IT and digital corporations have a responsibility to uphold human rights. This has to be an integral part of business practices as well as of national, EU and international regulation. Similar to the Corporate Accountability Index, IT companies should have to ensure that their technologies, algorithms and software do not violate human rights. The supervision of these processes needs to be independent, transparent and publicly accessible. Governments need to start meeting their obligations to protect human rights and the environment against the harmful activities of corporations. Thus, the introduction and regulation of corporate liability is needed nationally and internationally, including through the UN process towards a Binding Treaty on Business and Human Rights, as well as the implementation of transparency initiatives along the value chain such as the Kimberley Process, conflict minerals laws or the UK Modern Slavery Act.
Accessibility of information is vital for ensuring transparency of algorithms for all citizens.
1) Algorithmic transparency, auditing, risk assessments (possibly by the FRA) and data access for public-interest research and institutional oversight need to be embedded in the DSA and AI proposals. 2) Interoperability of algorithmic recommender systems provides an important entry point for strengthening media pluralism and free speech on online platforms, by unbundling content hosting from curation and allowing third-party recommender systems to expand online public debate - rather than including a must-carry provision in the DSA that is susceptible to manipulation (a minimal sketch of this unbundling follows).
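To make the unbundling in point 2 concrete, here is a minimal sketch of a platform that hosts content but delegates curation to interchangeable recommenders, so a third-party recommender can be plugged in. All class and method names are hypothetical; actual interoperability would have to be defined by an open standard.

```python
# Sketch: content hosting separated from curation via a common recommender
# interface. All names are hypothetical assumptions, not an existing API.
from abc import ABC, abstractmethod

class Recommender(ABC):
    @abstractmethod
    def rank(self, items: list[dict], user_profile: dict) -> list[dict]:
        """Return items in the order they should be shown to this user."""

class ChronologicalRecommender(Recommender):
    def rank(self, items, user_profile):
        return sorted(items, key=lambda i: i["posted_at"], reverse=True)

class EngagementRecommender(Recommender):
    def rank(self, items, user_profile):
        return sorted(items, key=lambda i: i["engagement_score"], reverse=True)

class Platform:
    """Hosts content; delegates curation to whichever recommender the user picked."""
    def __init__(self, items: list[dict]):
        self.items = items  # hosting layer: storage only, no ranking logic

    def feed(self, user_profile: dict, recommender: Recommender) -> list[dict]:
        return recommender.rank(self.items, user_profile)

posts = [{"id": 1, "posted_at": 10, "engagement_score": 0.9},
         {"id": 2, "posted_at": 20, "engagement_score": 0.1}]
platform = Platform(posts)
print(platform.feed({}, ChronologicalRecommender()))  # newest first: id 2, then 1
print(platform.feed({}, EngagementRecommender()))     # most engaging first: id 1
```

The design point is that the hosting layer stores content and nothing else; all ranking logic lives behind the Recommender interface, which is what makes third-party curation swappable.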
• There must be more regulation at national level on transparency over how data is being used.
1. All public and private users of automated decision-making should also be required to provide detailed information on when they use automated processes (whether algorithmic or otherwise) to moderate third-party content and on how such mechanisms operate. This information should be made available in public registers (a minimal sketch of such a register entry follows). 2. Redress mechanisms for those affected by algorithm-based automated decision-making should be a requirement.
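As an illustration of the public registers in point 1, a minimal sketch of what one machine-readable register entry for an automated moderation process might contain. The field names are assumptions derived from the points above, not a mandated schema.

```python
# Sketch: one entry in a public register of automated content-moderation
# processes. Field names are illustrative assumptions, not a mandated schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModerationSystemEntry:
    operator: str            # who deploys the system (public or private)
    purpose: str             # what the automated process decides
    is_algorithmic: bool     # algorithmic vs. other automated mechanisms
    inputs_described: str    # what data the system operates on
    human_review: bool       # whether a human reviews before action is taken
    redress_contact: str     # where affected users can appeal

entry = ModerationSystemEntry(
    operator="ExampleVideo Ltd.",
    purpose="Automated removal of suspected copyright-infringing uploads",
    is_algorithmic=True,
    inputs_described="Audio/video fingerprints matched against a rights database",
    human_review=False,
    redress_contact="appeals@examplevideo.example",
)
print(json.dumps(asdict(entry), indent=2))  # published in the public register
```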
Every algorithm (including self-learning systems) must be traceable and transparent.

Online Disinformation – Protection against false, inaccurate, or misleading information used to intentionally cause public harm or make a profit.

Answers:
EU and Member State policymakers should refrain from including in regulation broad provisions on harmful but legal content, such as disinformation, which can be misused to curb free speech and to target marginalised communities, human rights defenders and activists. A generalised ban on disinformation would violate international law; instead, positive measures such as digital literacy education and financial support for independent quality media are the necessary tools for advancing democratic debate.
1. Online content moderation should ultimately always require human review and intervention. The removal of illegal content online should take place only after a review process conducted by an independent, impartial and authoritative oversight body, on the basis of co-regulatory measures involving institutions, platforms and civil society stakeholders. Furthermore, government law-enforcement agencies requesting the removal of online content should be subject to the same procedural safeguards, in order to avoid the risk of abuse of powers and politically motivated censorship. 2. Social media platforms must take measures to prevent and act upon online smear campaigns spreading disinformation about CSOs, activists, human rights defenders and journalists.
A single European regulatory framework for disinformation should be created, and national agencies (funded by all media) where journalists check the news should be established (recommendation from the SMARTeD project).

Audiovisual Media Services – Regulation of online content and the role of online platforms in disseminating it, as this has a direct impact on freedom of expression and access to information. Rules on audiovisual advertising, the promotion of European works, and providers' obligations with regard to the protection of minors from potentially harmful content, among other measures.

Answers:
The EU Audiovisual Media Services Directive requires providers of audiovisual media services to ensure the accessibility of audiovisual content for persons with disabilities. The Directive should be effectively transposed and implemented by Member States, with legal quantitative and qualitative obligations for access services (e.g. subtitles for the deaf and hard of hearing, sign language interpretation, audio description, spoken subtitles, etc.).
Ethical codes of conduct should be put in place in audiovisual media services, so that they do not contribute to stigmatising discriminated groups such as the Roma, homeless people, people with disabilities, LGBTIQ+ people, etc.

Integrity of Elections - Rules to ensure greater transparency in the area of sponsored content in a political context (e.g. 'political advertising'); protection of the integrity of elections and promotion of democratic participation.

Answers:
Regulation should include 1) mandatory public ad libraries of all ads, including political ads, as well as real-time transparency on ads (targeting criteria, identity of advertiser and funder, campaign spend, data source and GDPR basis; a minimal sketch of such a record follows); 2) tighter restrictions on political ads (from core political actors and advertisers sponsored by them), such as targeting limitations and enhanced transparency criteria; 3) enhanced coordination between electoral management bodies (EMBs) in EU Member States; 4) a regular review mechanism to ensure the regulation stays up to date with the new campaigning reality and to test ideas such as banning lookalike audiences or the use of special-category data for political ad targeting.
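As an illustration of point 1, a minimal sketch of a single record in such a public ad library, covering the transparency items listed there. The field names are illustrative assumptions, not a legislated format.

```python
# Sketch: one record in a mandatory public library of political ads. Field
# names are illustrative assumptions covering the transparency items above.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class PoliticalAdRecord:
    advertiser: str                 # identity of the advertiser
    funder: str                     # who ultimately paid for the ad
    spend_eur: float                # campaign spend on this ad
    targeting_criteria: list[str]   # how audiences were selected
    data_source: str                # origin of the targeting data
    gdpr_legal_basis: str           # GDPR basis for processing that data
    published_at: str               # ISO timestamp, for real-time transparency

record = PoliticalAdRecord(
    advertiser="Example Party",
    funder="Example Party Foundation",
    spend_eur=12_500.0,
    targeting_criteria=["age 30-45", "interest: energy policy"],
    data_source="platform-declared interests",
    gdpr_legal_basis="consent (Art. 6(1)(a) GDPR)",
    published_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))  # entry as published in the library
```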

Online hate speech – Prevention of practices that denigrate people based on their race, ethnicity, gender, social status, sexual orientation, religion, age, or physical or mental disability, among others (infringing our rights to freedom of information and to non-discrimination).

Answers:
Hate speech poses grave dangers to our democracies, the protection of human rights and the rule of law. It covers many forms of expression which spread, incite, promote or justify hatred, violence and discrimination against a person or group of persons for a variety of reasons. Proper contact points within governmental institutions need to be in place to support victims of hate speech. Hate speech has to be a matter for public prosecution, including proper investigations and structural, financial and legal support for victims. Furthermore, we call for a stronger culture of digital courage, actively countering hate speech and voicing solidarity with victims. The removal of hate speech and of content violating terms of use on social media is currently handled overwhelmingly by people in the Global South. Often this means having to review graphic and violent content for hours. There needs to be proper protection and support for content moderators, including a global discussion on how not to leave them behind in a digital world.
Online platforms and governments should enhance their prevention and sanction mechanisms to combat online hate speech against marginalised groups.
We need greater EU support (financial and political) for organisations countering online hate speech and protecting survivors, particularly in their engagement with online platforms. Platform regulation should deal with illegal hate speech, in addition to self-regulation and strong oversight by CSOs and media organisations.
1. Interested third parties, such as civil society organisations or equality bodies contributing to tackling illegal activities online, should be regularly consulted by online content providers to help them assess the human rights impact of their content curation and moderation and devise effective policies/community guidelines compliant with such rights. 2. Criteria for restriction/removal should be set out in clear, detailed policies or guidelines that are adopted and regularly reviewed through a multi-stakeholder consultative process including civil society organisations. 3. Social media platforms must take measures to prevent and act upon online harassment and abuse against activists, human rights defenders and journalists, especially women and gender non-binary persons, racialised persons, and members of other marginalised groups.
The EU should undertake robust legislative measures to tackle online hate speech. Current codes of conduct for digital platforms are not sufficient.

Illegal content online - Measures to effectively tackle illegal content online, including issues such as incitement to terrorism, illegal hate speech, child sexual abuse material, infringements of Intellectual Property rights and consumer protection.

Answers:
Governments, companies, schools and other institutions have to ensure the protection of children's data based on international ethical standards. IT companies should refine tools, e.g. password protection, age verification, filtering and access controls, so that parents can create an appropriate online environment for children.
The EU proposals for the Digital Services Act and Digital Markets Act must be improved to ensure the accessibility of digital services and platforms for persons with disabilities. A lack of accessibility will negatively impact their rights to non-discrimination and privacy. For example, a lack of accessible reporting and complaint mechanisms means that persons with disabilities subjected to online hate speech cannot protect themselves from cyberbullying.

Other

Answers:
The use of autonomous technology for warfare has to be banned. The dehumanisation of war victims through remote killing is leading to a further escalation of violence and a climate of fear in many regions and countries. The militarisation of civilian IT technology, and technology development by the military, must not be financed with public money. Governments should instead engage internationally for peace and ban drones, AI and other digital technologies in warfare.
Ban biometric mass surveillance and protect encryption as necessary steps to protect privacy and other human rights and, by extension, safeguard the foundations of our democracies. None of the benefits of breaking encryption or allowing biometric mass surveillance for law enforcement agencies outweighs this risk to people's privacy.
• Prevention of surveillance capitalism in education: Prevent private interests from impacting the educational process and, especially in the aftermath of the COVID-19 pandemic, ensure that surveillance capitalism does not become a common occurrence.