Safety by Design: What EU Social Media Regulation Teaches Us About Better Platforms

EU regulation is beginning to focus on how social platforms are designed, not just what users post. This article explores how the Digital Services Act approaches safety, why reporting systems often fail users, and how platforms like Driftya use design choices such as visible reporting and automatic blocking to improve user safety.

By Niclas
A painterly illustration of a woman at a desktop computer reviewing report and safety options on screen, warm evening light with a European flag outside.

Over the past decade, social media regulation has mostly focused on content moderation.

What people post.
What should be removed.
How companies should handle illegal material.

But policymakers in Europe are increasingly looking at something deeper:
How the design of a platform itself affects safety.

The European Union’s Digital Services Act (DSA) reflects this shift. Instead of focusing only on what users say, it also examines how platforms structure interaction, reporting systems, and user protections.

That change matters more than it might seem, because many of the problems people experience online are not caused by content alone. They are caused by how systems are designed.

Why Platform Design Matters for Online Safety

Social media platforms are not neutral communication tools. Every design decision influences behavior.

Examples include:
• infinite scrolling feeds
• algorithmic amplification
• public follower counts
• visible engagement metrics
• friction in reporting harmful content

These features shape how people interact and how quickly content spreads. In some cases, they can also amplify conflict, harassment, and unhealthy attention loops. This is why regulators have begun examining “systemic risks” rather than just individual posts.

Under the EU Digital Services Act, large platforms must assess risks related to:
• the spread of harmful content
• manipulation of users
• negative impacts on mental health
• the effectiveness of safety mechanisms and reporting systems

You can read more about this approach in the European Commission overview:
EU Digital Services Act

And a recent European Commission statement about platform safety enforcement:
European Commission Press Release

These policies reflect an emerging idea:
Platform architecture itself is part of the safety system.

The Problem With Many Reporting Systems

Most social media platforms technically provide reporting tools.

But the experience often looks like this:

  1. A user reports harmful content.
  2. The platform confirms the report.
  3. The content remains visible.
  4. The same user can still interact with the reporter.

From a safety perspective, this design leaves a gap: the person who reported something may still encounter the same content or user again. In practice, the reporting system often protects the platform more than it protects the user.
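
To make that gap concrete, here is a minimal sketch of the conventional flow in TypeScript. The names (`Report`, `moderationQueue`, `submitReport`) are hypothetical, not any platform's actual API:

```typescript
// Hypothetical sketch of the conventional reporting flow.
// The report is recorded for later moderation, but nothing
// changes for the reporter: the content stays visible and
// the reported user can still interact with them.

interface Report {
  reporterId: string;
  contentId: string;
  reason: string;
  createdAt: Date;
}

const moderationQueue: Report[] = [];

function submitReport(reporterId: string, contentId: string, reason: string): void {
  moderationQueue.push({ reporterId, contentId, reason, createdAt: new Date() });
  // ...and that is all that happens. No hiding, no blocking: the
  // reporter's experience is identical before and after the report.
}
```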

Regulators have begun questioning whether reporting systems are too complicated or ineffective, particularly when they discourage users from protecting themselves. The EU Digital Services Act requires platforms to provide accessible and easy-to-use reporting mechanisms, emphasizing usability rather than complexity.

Reporting Should Protect the Reporter First

A safer design approach starts with a simple principle:
If someone reports something, the system should protect them immediately.

On Driftya, reporting a message triggers three immediate changes:
• the reported message is hidden for the reporter
• the report is stored in the system
• the reported user can be automatically blocked for the reporter

This means the user does not need to navigate additional settings or manually block the person afterward. The goal is not instant punishment. Instead, the goal is to give the reporter control over their own experience without friction.

This approach aligns with a moderation pattern sometimes called self-hide reporting, where reported content disappears for the reporter while the system retains the report for moderation review.
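
As a rough illustration of self-hide reporting, the sketch below combines the three steps in one handler. It is a minimal TypeScript sketch with hypothetical names (`hiddenMessages`, `blockedUsers`, `reportMessage`), not Driftya's actual implementation:

```typescript
// Illustrative self-hide reporting handler (hypothetical, not Driftya's code).
// A single action immediately changes the reporter's own view of the system,
// while the report itself is retained for moderation review.

interface ReportRecord {
  reporterId: string;
  messageId: string;
  reportedUserId: string;
  createdAt: Date;
}

const reports: ReportRecord[] = [];                    // retained for moderators
const hiddenMessages = new Map<string, Set<string>>(); // userId -> hidden message ids
const blockedUsers = new Map<string, Set<string>>();   // userId -> blocked user ids

function reportMessage(
  reporterId: string,
  messageId: string,
  reportedUserId: string,
  autoBlock = true
): void {
  // 1. Hide the reported message for the reporter.
  if (!hiddenMessages.has(reporterId)) hiddenMessages.set(reporterId, new Set());
  hiddenMessages.get(reporterId)!.add(messageId);

  // 2. Store the report for moderation review.
  reports.push({ reporterId, messageId, reportedUserId, createdAt: new Date() });

  // 3. Block the reported user for the reporter (optional, on by default).
  if (autoBlock) {
    if (!blockedUsers.has(reporterId)) blockedUsers.set(reporterId, new Set());
    blockedUsers.get(reporterId)!.add(reportedUserId);
  }
}
```

Note that nothing in this sketch punishes the reported user globally. The handler only changes what the reporter sees, which is exactly the "protect the reporter first" principle.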

Visibility of Safety Tools Matters

Another important design decision is visibility. On many platforms, reporting tools are buried inside multiple menus. That creates hesitation when users need to act quickly.

Driftya takes the opposite approach.

The reporting option is clearly visible and easy to access, because safety tools should never be hidden behind layers of navigation. Making reporting visible does not increase conflict. It simply ensures that users always have a clear way to protect themselves.
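
One way to express this in code is to treat safety actions as first-class, always-visible controls in the UI model. The sketch below is purely illustrative (a hypothetical `MessageAction` structure, not Driftya's UI code):

```typescript
// Hypothetical UI model: the report action is declared as an
// always-visible control on every message, never nested in an
// overflow menu behind extra taps.

interface MessageAction {
  id: string;
  label: string;
  alwaysVisible: boolean;
}

const messageActions: MessageAction[] = [
  { id: "reply",  label: "Reply",  alwaysVisible: true },
  { id: "report", label: "Report", alwaysVisible: true },  // one tap away
  { id: "copy",   label: "Copy",   alwaysVisible: false }, // safe to tuck away
];

// Controls drawn directly on the message card, with no extra navigation.
const visibleControls = messageActions.filter((action) => action.alwaysVisible);
```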

Designing Systems That Reduce Pressure

Driftya was designed with a different goal than typical social platforms. Instead of maximizing engagement metrics, the system focuses on reducing unnecessary emotional pressure.

Some design decisions reflect that philosophy:
• messages arrive one at a time (sketched below)
• no public likes or follower counts
• communication moves forward instead of accumulating into feeds
• reporting immediately protects the reporter

These choices align with a broader idea behind the platform: digital systems should aim to reduce friction and avoid emotional escalation rather than amplify it.
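
As a sketch of the first of these choices, one-at-a-time delivery might look like the following. The `MessageQueue` class and its methods are hypothetical, chosen only to show the idea:

```typescript
// Illustrative one-at-a-time delivery (hypothetical design, not Driftya's code).
// A new message is only surfaced once the current one has been handled,
// so messages never accumulate into a feed.

class MessageQueue {
  private pending: string[] = [];
  private current: string | null = null;

  enqueue(message: string): void {
    this.pending.push(message);
  }

  // Surface the next message only when no message is currently shown.
  next(): string | null {
    if (this.current === null) {
      this.current = this.pending.shift() ?? null;
    }
    return this.current;
  }

  // Mark the current message as handled, freeing the slot for the next one.
  markHandled(): void {
    this.current = null;
  }
}
```

A caller would enqueue incoming messages, show `next()` to the user, and call `markHandled()` once they respond, so attention stays on one conversation turn at a time rather than on a growing backlog.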

Regulation and Responsible Design Are Converging

European regulation is slowly recognizing something product designers have known for years:
Software design shapes human behavior.

A visible report button changes how people respond to harmful content. A platform without public engagement metrics changes how people communicate. A system that hides reported content immediately reduces emotional stress.

As policymakers continue exploring digital safety rules, these types of design decisions may become just as important as moderation policies.

Building Healthier Online Systems

The future of online safety will likely involve a combination of:
• regulation
• platform responsibility
• better interaction design

But some improvements do not require regulation at all. They simply require developers to ask a different question:
Not “How do we maximize engagement?”

But instead:
“How do we make the system safer for the people using it?”

Sometimes the answer is surprisingly simple.
If someone reports something, the system should protect them first.