In Romanian law this problem sits at the intersection of the right to dignity and reputation, protected mainly by the Civil Code, and freedom of expression, guaranteed by the Constitution and by Article 10 of the European Convention on Human Rights. At European level, the European Court of Human Rights (ECtHR) has developed important guidance in cases such as Delfi AS v. Estonia, Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary and Sanchez v. France.
At the same time, the liability regime for online platforms is shaped by EU law on intermediary services – from the e-Commerce Directive, transposed in Romania by Law no. 365/2002 on electronic commerce, to the more recent Regulation (EU) 2022/2065 – Digital Services Act.
This article explains, in practical terms, when a page or site administrator can be liable for third-party comments, in what circumstances online platforms come into play, and how courts attempt to balance free speech with the protection of reputation.
1. Legal framework: reputation, dignity and freedom of expression
1.1. Protection of reputation under the Romanian Civil Code
The Romanian Civil Code explicitly protects dignity and reputation. Article 72 of the Civil Code provides that everyone has the right to respect for their dignity and prohibits any interference with a person's honour and reputation made without that person's consent or outside the limits set by Article 75.
Neighbouring provisions deal with the right to one's image (Article 73), while Article 75 sets out the permissible limits of these personality rights, including when public-interest reporting or good-faith criticism may justify an interference with reputation.
Civil liability for defamatory statements is based on the general tort provisions in Articles 1349 and 1357 Civil Code, which state that everyone has a duty to respect the rules of conduct and to repair the damage caused to another person through an unlawful and culpable act.
1.2. Freedom of expression and the right to reputation in ECtHR case law
At Convention level, Article 10 ECHR protects freedom of expression, while Article 8 ECHR guarantees the right to respect for private life, which the ECtHR consistently interprets as including protection of a person’s reputation.
When States impose liability or sanctions for allegedly defamatory statements, the ECtHR examines whether the interference with freedom of expression is “necessary in a democratic society”. The Court applies a series of well-established criteria, including:
- whether the statements contribute to a debate of public interest;
- the status of the person concerned (public figure or private person);
- whether the statements are value judgments or factual allegations capable of proof;
- whether there was a sufficient factual basis for allegations of fact;
- the tone of the statements (gratuitous personal attack, hate speech, incitement);
- the severity of the sanction imposed.
These criteria are reflected in Romanian case law on defamation, where courts often refer to Articles 72–75 Civil Code in conjunction with Article 10 ECHR and ECtHR judgments when assessing Facebook posts, comments on news sites or online campaigns that allegedly harm reputation.
1.3. From criminal insult and defamation to civil liability
Historically, insult and defamation were criminal offences under the Romanian Criminal Code. Following Constitutional Court decisions and legislative changes, these offences were removed, and the protection of reputation shifted primarily to civil law: tort actions, injunctions preventing or stopping dissemination, and claims for moral damages. Legal guides explaining this evolution stress that, in the context of social media, victims now usually rely on civil remedies, sometimes in parallel with criminal complaints for threats, incitement to hatred or violations of private life where applicable.
2. Who is involved when defamatory comments appear online?
In practice, three categories of actors are involved:
- The author of the comment – the person (natural or legal) who writes and publishes the allegedly defamatory statement.
- The page / site / group administrator – the person who controls the specific space where the comment appears (a company Facebook page, a news website, a forum, a local group etc.).
- The platform – the provider of the underlying service (Meta/Facebook, YouTube/Google, a forum hosting service, a commenting widget provider etc.).
Liability can be shared or distributed between these actors depending on their role, level of control over the content, moderation systems and, crucially, their reaction once they become aware of unlawful comments.
3. Liability of page and site administrators under Romanian law
3.1. General principle: the administrator is not “invisible”
Romanian courts have made it clear that a site or page administrator cannot hide indefinitely behind the fact that they merely “host” user-generated content. In a landmark decision of the High Court of Cassation and Justice (civil section) regarding a news website, the court held that primary responsibility for defamatory statements published on a site open to comments lies with the site’s administrator, as the person who makes public communication possible and controls the publication tools. Legal commentaries on this decision emphasise that the administrator has a positive duty to exercise minimum control and to react diligently once informed of clearly unlawful content.
In that case, although the offensive statements were made by readers, the court considered that the site’s editorial team and administrator had allowed the comments to be published and remain online without any meaningful intervention, despite their manifestly defamatory nature. The court applied the general tort rules and concluded that the administrator’s passivity constituted an unlawful and culpable omission.
3.2. Defamatory comments on Facebook: co-responsibility of the page admin
Recent cases illustrate the same logic for Facebook pages. Courts have held page administrators liable where they allowed defamatory or insulting comments to remain visible, even after being notified by the victim. In one publicly discussed case, a court awarded moral damages and ordered the removal of a defamatory comment from a Facebook page, reasoning that the page admin had failed to exercise minimum moderation and had thereby contributed to the dissemination of the harm.
Other judgments, reported in legal and business media, show courts awarding tens of thousands of lei as moral damages for defamatory posts and comments on Facebook. The fact that the statements were made on social media, not in traditional media, did not diminish the seriousness of the interference with the victim’s dignity and reputation; on the contrary, courts pointed to the potential viral reach and persistence of online content as aggravating factors.
3.3. Positive obligations: moderation and reaction to notice
From the growing body of national case law, several core principles emerge regarding page and site administrators:
- Courts do not require general, permanent censorship, but they expect administrators to react promptly when they become aware of comments that are clearly defamatory, insulting or racist, or that incite hatred or violence.
- A standard disclaimer such as “opinions belong to users” is not sufficient if the administrator leaves problematic content online and thereby facilitates its dissemination.
- Courts examine whether the administrator provided at least a basic reporting mechanism and whether they actually acted on reports (quick removal, blocking repeat offenders, post-by-post moderation in high-risk contexts).
- Administrators are expected to adjust their level of moderation to the nature of the page: a professional news site, a campaign page or a large commercial page carries greater responsibility than a small, private account with limited reach.
In several decisions concerning defamation on social networks, the courts underlined that the person controlling the communication space has a position of control over the dissemination of comments, and thus a responsibility not to perpetuate manifestly defamatory content once they become aware of it.
3.4. Identifying administrators and authors of comments
A key practical difficulty in defamation cases is anonymity or pseudonymity. Romanian courts have, in some situations, ordered Meta/Facebook to provide information on the identity of page administrators or users responsible for severe personal attacks. In one widely reported case, a court obliged Facebook’s European entity to disclose the identity of administrators of a page targeting two individuals with defamatory content, so that they could be sued in civil proceedings.
In more recent proceedings, courts have also ordered platforms to reveal the data of anonymous page admins responsible for sustained smear campaigns against local public figures. These cases show that, under general civil procedure and tort rules, courts may compel platforms to cooperate in identifying those responsible for unlawful content, as long as data protection safeguards are observed.
4. Liability of online platforms: from Law 365/2002 to the Digital Services Act
4.1. Safe harbour regime under Law no. 365/2002 on electronic commerce
At national level, the liability of providers of information society services is governed by Law no. 365/2002 on electronic commerce, which transposes Directive 2000/31/EC on electronic commerce. Articles 11–15 of the law introduce so-called “safe harbours” – situations where intermediary providers (mere conduit, caching, hosting) are exempt from liability for information provided by recipients of their services, provided that:
- they do not have actual knowledge of the illegal nature of the information or activity of the recipient; and
- once they obtain such knowledge (for example via a notice), they act expeditiously to remove or disable access to the information.
Specialist commentaries note that Romania largely followed the e-Commerce Directive’s model, including the principle that intermediaries cannot be subject to a general obligation to monitor the information they transmit or store. The focus is on liability once an intermediary is aware of specific illegal content and fails to take appropriate action.
4.2. The Digital Services Act (DSA): a new European framework
In 2022, the EU adopted the Digital Services Act – Regulation (EU) 2022/2065, applicable from 17 February 2024 for most obligations. The DSA does not abolish safe harbours; rather, it refines them and introduces additional due diligence obligations for intermediary services, especially online platforms and very large online platforms.
According to official summaries, the DSA seeks to create a safer digital environment, defining clear responsibilities for platforms and social networks and establishing robust mechanisms for reporting and removing illegal content (including defamatory content, hate speech and serious violations of personality rights). It preserves the core immunity rules from the e-Commerce Directive but adds obligations such as:
- user-friendly notice and action mechanisms for illegal content;
- clear rules on platform terms and moderation policies;
- transparency reporting on content moderation;
- enhanced risk assessments and mitigation measures for very large platforms.
Legal analyses of the DSA underline that platforms are still protected from liability for user content as long as they act as neutral intermediaries and comply with notice-and-action obligations. However, repeated or systematic failure to respond to notices about illegal content may expose them to regulatory sanctions and, in some circumstances, to civil liability.
4.3. Platform vs. page administrator in practice
From the perspective of a victim of defamation in a comment:
- The comment’s author is the primary direct tortfeasor – the person who published the defamatory statement.
- The page or group administrator can be jointly liable if they tolerated or encouraged the comment, failed to act despite notice, or actively contributed to its visibility.
- The platform (e.g. Facebook) benefits from a conditional liability regime: it must provide effective reporting tools and act promptly when notified of clearly illegal content; its direct civil liability towards the victim is generally more difficult to establish and is filtered through the special intermediary liability rules, but its regulatory exposure under the DSA is significant if it fails to comply with its obligations.
Cases where courts ordered platforms to disclose the identity of administrators or commenters show that platforms are increasingly involved in the protection of reputation, even if they are not typically the first target of civil damages claims by victims.
5. ECtHR case law on online comments and intermediary liability
5.1. Delfi AS v. Estonia – a news portal liable for readers’ comments
In Delfi AS v. Estonia (Grand Chamber, 2015), the ECtHR examined a news portal that allowed anonymous comments under its articles. Beneath an article about a ferry company, readers posted a series of highly offensive and threatening comments targeting the company's majority shareholder.
Although Delfi had automatic filters and a notice-and-takedown system, the comments remained online for several weeks. The Estonian courts held the portal liable, and the ECtHR found that this liability did not breach Article 10. The Court stressed several factors: Delfi was a professionally managed news portal operated for profit; it invited readers to comment to increase traffic and advertising revenue; it exercised a degree of control over comments through filtering and deletion tools; and the comments at issue were clearly unlawful hate speech and incitement.
Commentaries on Delfi note that the Court viewed the portal not as a neutral, purely technical intermediary, but as an active publisher that had to exercise enhanced diligence in relation to obviously unlawful comments posted under its articles.
5.2. MTE and Index.hu v. Hungary – a more flexible approach
In Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (2016), the Court recalibrated its approach. The case involved an Internet news portal and a self-regulatory media body that had been held liable in civil proceedings for offensive reader comments which, unlike those in Delfi, did not amount to hate speech or incitement to violence.
The ECtHR found a violation of Article 10, emphasising that the Hungarian courts had not sufficiently distinguished between statements of fact and value judgments, and had not adequately considered the measures taken by the applicants (clear disclaimers, notice-based removal). The Court considered the liability regime imposed by the national courts to be disproportionate and potentially chilling for online discussion.
Academic analysis often refers to this judgment as “Delfi revisited”, noting that the Court does not impose automatic liability on intermediaries but insists on a context-sensitive assessment of factors such as the type of portal, nature of comments, moderation tools and behaviour after notice.
5.3. Sanchez v. France – politician punished for followers’ comments
In Sanchez v. France (Grand Chamber, 2023), the applicant was a French politician and election candidate who administered a Facebook account used for campaign purposes. Under one of his posts, followers left comments that were strongly anti-Muslim and incited hatred and violence. The comments remained online for several weeks; the applicant did not remove them.
French courts convicted him for incitement to hatred, not for writing the comments himself, but for failing to take prompt action to delete them in his capacity as account holder and political candidate. The ECtHR held that this did not violate Article 10, noting in particular that he was a professional politician, that the comments amounted to hate speech, that he had control over the account and had not acted promptly, and that the criminal penalty imposed was relatively moderate.
Commentators have pointed out that Sanchez extends the concept of responsibility for user comments, especially for politicians and public figures who use social media pages as campaign tools. The judgment reinforces the message that those who operate influential pages cannot turn a blind eye to unlawful hate speech posted by their supporters.
5.4. Lessons for page administrators in Romania
When combined with Romanian civil law and case law, the ECtHR line of cases suggests several key lessons for page and group administrators:
- Administrators of professional or high-impact pages (companies, news outlets, institutions, politicians) bear greater responsibility than casual users, especially where the page has commercial or electoral purposes.
- Obviously unlawful comments – such as statements containing serious unfounded accusations, hate speech, or incitement to violence – cannot be left online indefinitely under the mere banner of “free speech”.
- Prompt action after notice is crucial: keeping defamatory or hate speech comments online after becoming aware of them will significantly increase the risk of being found jointly liable.
- Courts will look at the overall moderation culture of the page: whether the administrator encourages or discourages abusive speech, whether rules exist and are enforced, and how the administrator reacts in practice.
6. Practical scenarios and risks for administrators and platforms
6.1. A local business Facebook page
Imagine a Facebook page for a local restaurant. Under a promotional post, third-party users start posting comments not about the restaurant, but about a competing business: “X launders money”, “X scams customers”, “X is a criminal”, without any factual basis. The page admin is tagged in several replies and receives a private message from the competitor asking for help, but does nothing – the comments remain online and continue to be shared.
In this scenario, the competitor could sue both the individual commenters and the page admin, arguing that the latter facilitated and perpetuated the dissemination of defamatory allegations. A court would assess whether the admin took minimum reasonable steps (removal of obviously unlawful comments, discouraging abusive discussions, reacting to notice). Lack of any reaction could lead to joint civil liability and an obligation to pay moral damages.
6.2. News site with an open comment section
A local news website hosts an open, anonymous comment section under its articles. There are “comment rules” that nominally ban insults, but there is no active moderation, only a hidden “report” button. Under articles about a local entrepreneur, comments accumulate over months, containing repeated allegations of fraud and dishonesty. The entrepreneur sends multiple takedown requests through the contact form and by email, but the site administrator neither responds nor removes the comments.
In light of Delfi, MTE/Index.hu and Romanian tort law, a court would likely find that the site has moved beyond a purely passive intermediary role. The combination of an open unmoderated section, a commercial news operation, and persistent inaction after notice would strongly support a finding of civil liability for the site operator, in addition to the individual commenters. The damages could reflect the seriousness and duration of the attacks and the size of the audience.
6.3. Facebook group and the responsibility of the “admin”
In Facebook groups, admin and moderator roles are often treated informally, but they still entail technical control over content. If, in a public or closed group, users conduct a sustained smear campaign against a teacher, doctor or local entrepreneur – posting insults and false allegations – and the group admin continues to approve posts, pins them or explicitly encourages the discussion, a court may view the admin as an active participant in the defamation.
Published Romanian cases so far focus primarily on authors of posts, but the European trend suggests that, over time, group admins – especially in professional, commercial or political groups – may find themselves increasingly targeted by defamation claims when they clearly tolerate or promote unlawful content.
6.4. Large platforms (Facebook, YouTube) and notices about defamatory content
In practice, victims of defamation frequently use platforms' internal reporting tools for hate speech, harassment or other harmful content. Under the DSA, these notice and action mechanisms must be easy to access and user-friendly, and platforms must process notices in a timely, diligent and non-arbitrary manner, giving reasons for their decisions.
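To make this concrete, the sketch below models, in Python, the minimum information Article 16(2) DSA requires a notice of illegal content to contain: a substantiated explanation of why the content is illegal, its exact URL(s), the notifier's name and email, and a statement of good faith. The DsaNotice class and its field names are illustrative assumptions, not an official schema – in practice each platform provides its own reporting form.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative model of the content of a DSA Article 16(2) notice;
# the class and field names are assumptions, not an official schema.
@dataclass
class DsaNotice:
    explanation: str            # substantiated reasons why the content is illegal
    exact_urls: list[str]       # exact electronic location(s), e.g. permalinks
    notifier_name: str          # name of the person or entity giving notice
    notifier_email: str         # contact email of the notifier
    good_faith_statement: bool  # bona fide belief that the notice is accurate

    def to_json(self) -> str:
        """Serialise the notice so the victim keeps a record of what was reported."""
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)

# Hypothetical example (names and URLs are placeholders):
notice = DsaNotice(
    explanation="Comment falsely accuses the victim of fraud, with no factual basis.",
    exact_urls=["https://example.com/post/123#comment-456"],
    notifier_name="Jane Doe",
    notifier_email="jane.doe@example.com",
    good_faith_statement=True,
)
print(notice.to_json())
```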
If a platform systematically ignores well-founded notices or fails to comply with its due diligence obligations, national regulators can impose substantial administrative fines. However, in the relationship between the victim and the platform, the conditional liability regime still applies: civil actions for damages typically focus first on the author of the comment and the page or site administrator. Platforms primarily face regulatory exposure rather than direct civil liability, although this distinction may gradually evolve under DSA enforcement practice.
7. Practical recommendations for administrators and victims
7.1. For page, site and group administrators
Administrators of pages (businesses, professionals, politicians, influencers) can significantly reduce their legal risk by adopting a few concrete practices:
- Clear comment rules – publish a brief policy prohibiting insults, defamation, hate speech and threats, and stating that such comments will be removed.
- Visible reporting channels – encourage users to report problematic comments and monitor the inbox and notification channels.
- Prompt reaction – when notified about an obviously defamatory or hateful comment, review it quickly and, if clearly unlawful, remove it and, where appropriate, block the user responsible.
- Enhanced moderation in sensitive areas – for political topics, ethnic or religious issues and other polarising subjects, consider pre-moderation, keyword filters or temporarily limiting comments (a minimal keyword-filter sketch follows this list).
- Public distancing – in borderline cases, consider posting a clarification that the page does not endorse certain allegations and has no evidence to confirm them, while reviewing the situation.
- Internal policies and training – businesses should train social media staff on legal risks and procedures for handling complaints, including who decides on removals and how quickly.
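To illustrate the keyword filters mentioned in the list above, here is a minimal sketch of pre-moderation, assuming a hypothetical blocklist maintained by the administrator. Keyword matching alone over-blocks legitimate speech and misses creative spellings, so matched comments are held for human review rather than deleted automatically.

```python
import re

# Hypothetical blocklist of terms that trigger manual review; in practice
# the administrator would tailor this to the page's language and context.
BLOCKLIST = ["scam", "criminal", "launders money"]

def flag_for_review(comment: str) -> bool:
    """Return True if the comment contains a blocklisted term (whole words, case-insensitive)."""
    lowered = comment.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", lowered) for term in BLOCKLIST)

review_queue = []
for comment in ["Great food, friendly staff!", "Everyone knows X launders money"]:
    if flag_for_review(comment):
        review_queue.append(comment)  # held back for a human moderator, not deleted
    else:
        print("published:", comment)

print("awaiting review:", review_queue)
```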
Crucially, a generic disclaimer alone will not prevent liability. Courts and the ECtHR look at the administrator's actual behaviour: whether they encourage engagement while ignoring complaints, or whether they try in good faith to keep discussions lawful and respectful.
7.2. For victims of defamatory comments
Individuals and companies affected by defamatory comments can follow a gradual strategy:
- Collect evidence – take screenshots (including date and time), save the URL of the post and, if possible, the permalink of the comment; capture the context such as reactions, shares or follow-up comments (see the evidence-log sketch after this list).
- Contact the administrator – send a clear message explaining which comment is defamatory, why, and what you request (removal, apology, clarification).
- Use the platform’s reporting tools – file an official report under the appropriate category (harassment, hate speech etc.) and keep a record of the report and any responses received.
- Formal legal notice – if nothing changes, instruct a lawyer to send a formal cease-and-desist letter to both the comment’s author (if identifiable) and the page admin, demanding removal and compensation.
- Civil action – as a last resort, file a civil lawsuit for defamation based on Articles 72 and following, and Articles 1349 and 1357 Civil Code, asking the court to find that your reputation has been unlawfully harmed and to award moral damages and other appropriate measures (for example, deletion of posts, publication of the judgment).
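For the evidence-collection step above, a timestamped, hash-verified record can help prove later what was online and when. The sketch below – file names and URLs are hypothetical – appends such an entry to a local JSON log; it illustrates good documentation practice and does not replace notarised or bailiff-certified evidence.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(screenshot: Path, post_url: str, comment_permalink: str,
                 log_file: Path = Path("evidence_log.json")) -> dict:
    """Append a timestamped, hash-verified entry for a saved screenshot to a JSON log."""
    entry = {
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "post_url": post_url,
        "comment_permalink": comment_permalink,
        "screenshot_file": screenshot.name,
        # The SHA-256 hash helps show later that the file has not been altered.
        "sha256": hashlib.sha256(screenshot.read_bytes()).hexdigest(),
    }
    entries = json.loads(log_file.read_text()) if log_file.exists() else []
    entries.append(entry)
    log_file.write_text(json.dumps(entries, indent=2))
    return entry

# Hypothetical usage (paths and URLs are placeholders):
# log_evidence(Path("comment_2024-05-01.png"),
#              "https://www.facebook.com/examplepage/posts/123",
#              "https://www.facebook.com/examplepage/posts/123?comment_id=456")
```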
Recent Romanian decisions show that courts do award moral damages for online defamation, from modest amounts to significant sums, depending on the gravity and duration of the attack, the audience size and the defendants’ conduct (persistence vs. prompt corrective action).
7.3. Interaction with data protection and criminal law
Where defamatory comments also reveal personal data (addresses, health data, family details) or sensitive personal information, GDPR issues arise. The Romanian Data Protection Authority can intervene and order deletion of unlawfully published data and impose administrative fines, alongside civil court remedies.
In addition, where comments go beyond defamation into threats, extortion or incitement to hatred and violence, provisions of the Criminal Code may apply. In such cases, a criminal complaint can be combined with or followed by a civil claim for damages.
8. Conclusions
The liability of page administrators and online platforms for defamatory comments is currently shaped by three overlapping layers:
- national law (Civil Code, Law 365/2002, general tort rules and emerging case law on online defamation);
- ECtHR case law (especially Delfi, MTE/Index.hu and Sanchez), which articulates the necessary balance between free speech and reputation in the online environment;
- modern EU regulation on intermediaries (the Digital Services Act), which refines safe harbours and imposes due diligence obligations.
The central message for page and group administrators is that once they play an active role in disseminating content – by encouraging comments, keeping them visible and ignoring serious complaints – they can no longer rely on the idea of purely technical neutrality. Freedom of expression remains a fundamental right, but it does not legitimise defamation or hate campaigns against individuals.
At the same time, victims of online defamation have increasingly effective tools at their disposal – from takedown procedures and identification of anonymous authors, to meaningful monetary compensation and corrective measures ordered by courts. The key ingredients are good documentation, prompt reaction, and a combined use of platform tools, national law and European instruments.
FAQ – Frequently Asked Questions on Liability for Defamatory Comments
1. As a Facebook page admin, am I liable for comments posted by other users?
Yes, in some circumstances. Under Romanian civil law and in light of ECtHR case law, a page admin can be held civilly liable if they allow clearly defamatory comments to remain online, fail to react after being notified, or encourage such comments. Courts look at the admin’s actual behaviour – whether they took reasonable steps to remove or discourage unlawful content – rather than accepting a purely “technical” role.
2. Is a disclaimer like “opinions belong to users, not the page” enough to avoid liability?
No. A disclaimer may have some informative value, but it does not automatically shield the admin from liability if, in reality, they maintain or even stimulate obviously defamatory or hateful comments. Both Romanian judgments and ECtHR decisions (such as Delfi and Sanchez) stress that liability depends on context: level of control, moderation practices and reaction to notice.
3. How quickly do I need to remove a clearly defamatory comment after being notified?
The law does not set a precise number of hours, but it requires a “prompt” reaction. In practice, the more serious the comment (hate speech, threats, grave factual accusations without evidence), the quicker the removal should be – ideally within the same day or even within hours, especially where the page has a large audience. Unjustified delay increases the risk of joint liability.
4. Can platforms like Facebook or YouTube also be held liable for defamatory comments?
Platforms benefit from a conditional liability regime under Law 365/2002 and the Digital Services Act. They are not automatically responsible for all user-generated content, but they must provide effective reporting tools and act expeditiously when notified of illegal content. They can be sanctioned by regulators if they systematically ignore obligations, and in exceptional cases they may also face civil claims, although victims usually sue authors and page admins first.
5. What can I do if I am defamed in a comment on a page or in a group?
You should promptly collect evidence (screenshots, URLs), ask the admin to remove the comment, report it using the platform’s tools, and, if nothing changes, send a formal legal notice and consider a civil lawsuit for defamation. In court you can seek a finding that your reputation has been unlawfully harmed, the removal of the content, and moral damages proportional to the seriousness of the attack.
6. Can I still face criminal charges for insult or defamation in online comments?
Insult and defamation as stand-alone criminal offences have been removed from the Criminal Code, and protection of reputation is now mainly civil. However, certain forms of speech – threats, blackmail, incitement to hatred or violence, breaches of privacy – remain criminally punishable. Posting such content in comments can lead to criminal investigations in addition to civil liability.
7. As an admin, am I legally obliged to monitor all comments proactively?
EU law (including the DSA) prohibits imposing a general monitoring obligation on intermediaries. That said, courts expect a certain level of vigilance, especially on high-risk pages (political, large media, sensitive topics), and a prompt response when the admin becomes aware of unlawful comments. You are not required to pre-filter everything, but you must act when a specific problem becomes apparent.
8. What evidentiary challenges arise in defamation cases based on comments?
The main challenges are identifying the author or admin (in anonymous or pseudonymous contexts) and proving the exact content of the comments, which may be edited or deleted later. This is why timely screenshots, archived copies and, in serious cases, notarised or bailiff-certified records can be crucial. Courts may also order platforms to disclose certain identification data, within the limits of data protection rules, to enable victims to bring claims.
