Policy Brief  

Bridging-Based Ranking

By Aviv Ovadya


The Problem

Algorithmic ranking and recommendation systems determine what kinds of behaviors are rewarded by digital platforms like Facebook, YouTube, and TikTok by choosing what content to show to users. Because these platforms dominate our attention economy, and because attention can be transformed into money and power, platform recommendations provide a reward structure for society at large.

Platforms currently reward divisive behavior with attention due to the interactions between engagement-based ranking and human psychology. This shapes which politicians, journalists, entertainers, and others succeed in their respective social arenas, with significant consequences for the quality of our decision-making, our capacity to cooperate, the likelihood of violent conflict, and the robustness of democracy.

The Opportunity

We can potentially mitigate this ‘centrifugal’ force toward division by deploying ranking systems that do the opposite—that provide a countervailing ‘centripetal’ or bridging force.

Bridging-based ranking rewards behavior that bridges divides. For example, imagine if Facebook rewarded content that led to positive interactions across diverse audiences, including around divisive topics. How might that change what people, posts, pages, and groups are successful?
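
To make this concrete, the sketch below contrasts a toy engagement-style score with a toy bridging-style score. It is purely illustrative: the data shapes, group labels, and the min-across-groups rule are assumptions made for this example, not the report's (or any platform's) actual ranking method.

```python
# Illustrative sketch only: a toy "bridging score" for ranking candidate posts.
# All data structures and the scoring rule are hypothetical assumptions for
# this example, not the report's (or any platform's) actual algorithm.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    # Count of positive interactions (e.g. likes, "helpful" votes) from each
    # audience group, keyed by a coarse group label such as an inferred
    # political leaning or community cluster.
    positive_by_group: dict[str, int]


def engagement_score(post: Post) -> int:
    """Engagement-style baseline: total positive interactions, regardless of source."""
    return sum(post.positive_by_group.values())


def bridging_score(post: Post, groups: list[str]) -> float:
    """Toy bridging-style score: reward content only to the extent that it earns
    positive interactions from *every* group, so one-sided applause counts for little."""
    if not groups:
        return 0.0
    # Using the minimum across groups means a post loved by one side and ignored
    # (or disliked) by the other scores near zero.
    return float(min(post.positive_by_group.get(g, 0) for g in groups))


if __name__ == "__main__":
    groups = ["group_a", "group_b"]
    divisive = Post("divisive", {"group_a": 900, "group_b": 3})
    bridging = Post("bridging", {"group_a": 250, "group_b": 220})

    for p in (divisive, bridging):
        print(p.post_id, engagement_score(p), bridging_score(p, groups))
    # The divisive post wins under engagement ranking (903 vs 470) but loses
    # badly under the bridging score (3 vs 220).
```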

This report explores the potential of bridging and discusses some of the most common objections, addressing questions around legitimacy and practicality. It contrasts bridging with some of the most discussed approaches for reforming ranking: reverse-chronological feeds, ‘middleware’, and ‘choose your own ranking system’. (Unfortunately, without introducing bridging, all of these proposed reforms still reward those who seek to divide.) Finally, this report explores early examples where bridging systems are already being tried with some success.

Summary of Next Steps

We can and should rapidly build capacity to develop, evaluate, and deploy bridging-based ranking systems.

  • Governments, platforms, funders, and researchers must direct resources towards this goal.
  • We specifically call for platforms to measure the extent to which their products divide people (bridging metrics; see the illustrative sketch following this list), and to incorporate both bridging metrics and bridging-based ranking into their product roadmaps and quarterly goals.
  • To address legitimacy and platform power concerns, we suggest putting the ultimate question of ‘what recommendation systems should reward’ to the impacted populations through platform democracy. We further argue that the default should not be divisive engagement-based ranking or chronological feeds (which reward those who post the most), but bridging-based ranking (which actively mitigates divisive tendencies).
  • We must involve interdisciplinary scholars and practitioners to ensure that what we create is truly beneficial for the public and democracy.
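
As a toy sketch of what a platform-level bridging metric could look like, the example below averages how skewed each recommended post's positive interactions are across audience groups. The skew formula and data shapes are assumptions made for illustration, not a metric specified in the report.

```python
# Illustrative sketch only: a toy platform-level "division metric" that a
# product team could track alongside engagement metrics. The skew formula
# and data shapes are hypothetical, not a metric proposed in the report.

def group_skew(positive_by_group: dict[str, int]) -> float:
    """Return 0.0 when positive interactions are evenly spread across groups
    and approach 1.0 when they come almost entirely from one group."""
    counts = list(positive_by_group.values())
    total = sum(counts)
    if total == 0:
        return 0.0
    n = len(counts)
    if n <= 1:
        return 1.0
    largest_share = max(counts) / total
    # Rescale so an even split gives 0 and a single-group monopoly gives 1.
    return (largest_share - 1 / n) / (1 - 1 / n)


def division_metric(recommended_posts: list[dict[str, int]]) -> float:
    """Average skew across recommended posts; lower means more bridging."""
    if not recommended_posts:
        return 0.0
    return sum(group_skew(p) for p in recommended_posts) / len(recommended_posts)


if __name__ == "__main__":
    recommended = [
        {"group_a": 900, "group_b": 3},    # one-sided engagement
        {"group_a": 250, "group_b": 220},  # broadly appreciated
    ]
    print(f"division metric: {division_metric(recommended):.2f}")  # ~0.53
```

A metric like this could appear on product dashboards and in quarterly goals alongside engagement numbers, making it visible when a ranking change increases division even as it increases engagement.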

Bridging-based ranking alone is not a silver bullet—we need other reforms to address the many challenges of platform-enabled connectivity. But bridging would help address one of the most significant risks—that of being pushed past a “division threshold” beyond which democracy can no longer function.

