Social media, unhinged
(Part 2)
In the first part of this column, I cited the urgent need for regulations on social media platforms and content creators, particularly in light of their insidious influence and ambient manipulation. This manipulation is subtly carried out by algorithms that shape our perceptions and decisions in ways we often fail to notice.
The agents of disinformation are sophisticated enough to align their narratives seamlessly with the agendas of their principals, and they operate most effectively in vulnerable open societies like the Philippines, and even the United States.
I am not speaking off the cuff here, as many media scholars have addressed this issue. In 2024, professor Jose Mari Lanuza (UP Manila) and Jonathan Ong (University of Massachusetts), in their work “From Disinformation Campaigns to Influence Operations: New Campaign Tactics and Legacy Media Bypass in the Philippines,” argued that longitudinal disinformation campaigns have created hyperpartisan information systems. As a result, political campaigns have become increasingly difficult to hold accountable.
Dr. Ong also co-authored with Dr. Jason Vincent Cabañes (University of Leeds) a report entitled “Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines” (2018). The report examines “the ecological vulnerabilities in the Philippines that enable politicians to recruit highly skilled, if corruptible, disinformation architects to collude with them without industry self-regulatory sanctions and mechanisms in place.”
In the past, with legacy media, journalists competed on newsworthiness and public trust, not on engagement-driven metrics. Today, with digital platforms, engagement-driven algorithms prioritize content that triggers emotion, outrage, and sensationalism. Echo chambers and filter bubbles reinforce biases, while automated programs (bots) and coordinated manipulation campaigns distort public opinion.
It becomes clear, then, that equating digital platforms with legacy media in regulatory terms is misguided: extending to these platforms the same tolerance our Constitution traditionally grants legacy media risks undermining the very democratic space that press freedom aims to protect.
If such unqualified treatment is allowed to continue, elections will be won—not by those who are best at exposing the truth and setting the most important agenda, but by those who are most sophisticated at employing disinformation, as seems to be happening now.
The question is: where do we draw the line in regulation? We must take care not to suppress free speech, and not to grant the state too much discretion in deciding what counts as meaningful speech, what is mere information garbage, and what is information that, even if not imminently dangerous, erodes democracy slowly but surely.
I honestly don’t even have half the answers right now. However, if we look at societies that have successfully pushed back against disinformation, Taiwan stands out as a good example. Taiwan has resisted calls for tougher laws against disinformation. Instead, it has adopted a multifaceted approach engaging the government, educational institutions, and civil society, in what is known as a “whole-of-society” response.
However, this model may be too slow for us in the Philippines or may not work at all, given the challenges in our educational system and the corruptibility of our institutions.
I advocate for media literacy as a long-term solution. As an urgent measure, I also support proposals requiring digital platforms to disclose how their algorithms work—specifically, how they prioritize content. We should also examine the rewards or financial incentives that drive disinformation. Regulation can target these economic motivations to promote credible content and minimize harmful content.
Ultimately, this is a matter for Congress to study thoroughly, with extensive consultations across different sectors.