Featured research

Latest research articles

Auditing large language models

Large language models (LLMs) have revolutionized artificial intelligence research, but ethical challenges persist. A new approach to auditing LLMs aims to address these concerns and promote responsible AI use.

US Algorithmic Accountability vs. the EU Artificial Intelligence Act: A Comparative Analysis

The European legislation on AI

Philosopher Luciano Floridi analyzes European AI legislation, its philosophical approach, challenges in legal implementation, and the impact of the pandemic, offering insights for policymakers and researchers.

Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity

An analysis of the legal and regulatory challenges that generative AI and large language models (LLMs) pose in the European Union.

A unified framework of five principles for AI in society

A comprehensive ethical AI framework built on the principles of beneficence, non-maleficence, autonomy, justice, and explicability, aimed at ensuring positive social outcomes from AI technology.

Taking AI risks seriously: a new assessment model for the AI Act

A new risk-assessment model integrates the EU AI Act with the IPCC's approach to risk, enabling estimation of the magnitude of AI risks by considering the interactions between risk determinants, drivers, and types.

The ethics of algorithms: Mapping the debate

Algorithmic decision-making: this study maps the ethical debate around operations that humans delegate to algorithms, highlighting ethical implications and the need for responsible governance.
Join our newsletter to stay up to date on features and releases.
By subscribing you agree to our Privacy Policy and consent to receive updates from our company.
© 2023 oxethica GmbH. All rights reserved.