Ethics around AI (1/3)


Within digital content creation, artificial intelligence (AI) has become an important tool for publishers. AI provides opportunities to streamline processes, increase productivity and create new forms of content. But these technological advances also bring new ethical challenges. In this blog series, we dive deeper into the ethical considerations around transparency, authenticity and accountability in the context of AI-generated content. How do we ensure that we not only innovate, but do so responsibly?

Transparency and Authenticity

One of the core principles in using AI ethically in content creation is transparency. It is essential that both creators and consumers understand when and how AI is used to generate content. This openness is crucial to maintaining trust between publishers and their audiences. But transparency is not just about revealing the use of AI; it is also about ensuring the authenticity of the content. In an era when AI can produce texts, images and even videos that are barely distinguishable from human work, maintaining a standard of authenticity is essential. This means developing clear guidelines on how we use AI without undermining the authenticity and integrity of our content.

EU directives

In the European Union, the deployment of artificial intelligence (AI) is regulated by the AI Act, the world’s first-ever comprehensive AI law. This legislation shows how serious the EU is about ensuring safe and transparent AI use that respects our democratic values. The AI Act was recently approved by the relevant committees of the European Parliament and now awaits final votes in parliament and formal adoption (Euronews). This should take place within a few months.

AI systems with unacceptable risk, such as those that apply cognitive behavioral manipulation or assign social scores, will be banned. High-risk AI systems are subject to strict regulations, including transparency requirements and registration in an EU database.

Generative AI systems, such as ChatGPT, must make it clear when content has been generated by AI. In addition, high-impact general-purpose AI models, which may pose a systemic risk, must undergo thorough evaluations, and any serious incidents must be reported to the European Commission.

AI systems with limited risk must meet minimum transparency requirements that allow users to make informed decisions, especially for AI systems that generate or manipulate image, audio or video content, such as deepfakes.

These regulatory developments highlight the need for all parties, including authors and publishers, to be transparent about the use of AI in the creation of works, and ensure that control over these systems is maintained to prevent harmful outcomes.


Accountability

The question of accountability in AI-generated content is complex. Who is responsible for the accuracy, reliability and ethics of AI-produced works? This responsibility lies with both the developers of AI technology and its users. To ensure ethical content, we must establish guidelines that both harness AI’s creative potential and mitigate its risks. This includes implementing controls to ensure that AI tools reflect and adhere to the organization’s ethical standards.


A concrete example highlighting the need for transparency and authenticity is found in the world of book writing. There is a growing debate about whether writers should disclose that their works were written with the help of AI. On the one hand, AI writing assistance allows writers to work more efficiently, overcome creative blocks and even explore new styles and genres. On the other hand, the question remains whether the use of AI affects the “authenticity” of the literary work. Should books significantly generated by AI be marked as such? This debate goes to the heart of our values around authorship and originality. It also provides a practical example of how transparency is not only an ethical issue, but also a factor influencing the public’s perception of value and authenticity. By being open about the use of AI, writers and publishers can maintain a relationship of trust with their readers while innovating within the literary field.


As we explore the possibilities of AI in content creation, we must be aware of the ethical implications. Transparency, authenticity and accountability are not just abstract concepts, but practical principles that guide how we use technology for creative purposes. By embracing these ethical guidelines, we can shape a future of content creation that is both innovative and integrity-driven. Let’s engage in a dialogue on how to use this technology responsibly, so that we maintain the value of our work and the trust of our audience. And in the spirit of transparency: yes, this blog post was also written in collaboration with ChatGPT.

Blog series on ethics around AI

This is the beginning of our exploration of the world of AI in the creative industries. In this first part, we focused on the ethical considerations around transparency, authenticity and accountability, and on the emerging EU regulatory framework. Stay with us for Blog 2, where we dive deeper into how to put these principles into practice within media and publishing, and explore how AI can be used responsibly to drive innovation and creativity.


