Tag: algorithms

  • Algorithmic Governance: How the GFW Predicts Protests Before They Happen.

    Algorithmic Governance: How the Great Firewall Predicts Protests Before They Happen

    As technology and social media become increasingly central to everyday life, governments and organizations are turning to advanced algorithms for prediction and control. One such example is China’s Great Firewall (GFW), notorious for its censorship and surveillance activities. Recent developments suggest that the GFW may now be capable of predicting protests before they occur.

    Predictive Capabilities

    “The Great Firewall is not just a censor, but also an intelligent system that predicts and shapes online discussions,” said Professor Ronald Deibert, director of the Citizen Lab at the University of Toronto.

    According to research conducted by the Citizen Lab, the GFW uses a combination of machine learning algorithms and human analysts to monitor and anticipate online discussions that could potentially lead to protests or social unrest. The system’s predictive capabilities are based on analyzing patterns of user behavior, sentiment analysis, and the identification of key influencers within online communities.
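    The monitoring pipeline described above (behavioral patterns, sentiment analysis, and identification of key influencers) can be sketched in miniature. Everything below, including the lexicon, the sample posts, and the mention-counting heuristic, is invented for illustration; it is not the GFW's actual design.

```python
from collections import Counter

# Hypothetical toy pipeline: score post sentiment with a keyword lexicon,
# and rank "influencers" by how often other users @-mention them.
NEGATIVE = {"protest", "strike", "corrupt", "unfair"}
POSITIVE = {"support", "thank", "great"}

def sentiment_score(text):
    """Crude lexicon score: each positive word counts +1, each negative -1."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def top_influencers(posts, n=3):
    """Rank users by incoming @-mentions, a stand-in for influence."""
    mentions = Counter()
    for _, text in posts:
        for w in text.split():
            if w.startswith("@"):
                mentions[w.lstrip("@").rstrip(".,!")] += 1
    return [user for user, _ in mentions.most_common(n)]

posts = [
    ("u1", "The new policy is unfair, join the protest with @organizer"),
    ("u2", "@organizer is right, this is corrupt"),
    ("u3", "great support for the policy"),
]

# Flag posts whose sentiment dips below zero, and surface the key account.
alerts = [(u, sentiment_score(t)) for u, t in posts if sentiment_score(t) < 0]
print(alerts)                  # the two negative posts
print(top_influencers(posts))  # most-mentioned account
```

    A real system would replace the lexicon with trained classifiers and the mention count with graph centrality over a social network, but the two-stage shape (score content, then locate the accounts that amplify it) is the same.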

    Intervention Strategies

    “Once a potential protest is identified, the system can then implement various intervention strategies to suppress or divert public discussion,” explained Deibert. These strategies may include blocking access to specific websites or social media platforms, deleting sensitive content, and even manipulating search results to steer online conversations away from controversial topics.
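    Two of the interventions named above, keyword-based blocking and the demotion of search results, reduce to very small pieces of logic. The blocklist terms and sample results below are invented for illustration and do not reflect any real filter.

```python
# Hypothetical sketch of keyword-based intervention: drop requests whose
# query matches a blocklist, and re-rank search results so pages mentioning
# sensitive terms sink to the bottom.
BLOCKED_TERMS = {"protest", "rally"}

def filter_request(url, query):
    """Return False (block) when the query contains a blocked term."""
    tokens = [t.strip(".,!?") for t in query.lower().split()]
    return not any(t in BLOCKED_TERMS for t in tokens)

def demote_results(results, sensitive_terms=BLOCKED_TERMS):
    """Stable-sort results by how many sensitive terms each title mentions."""
    def penalty(title):
        return sum(term in title.lower() for term in sensitive_terms)
    return sorted(results, key=penalty)

results = ["City rally planned downtown", "Weather forecast", "Traffic update"]
print(filter_request("example.com/search", "rally location"))  # blocked
print(demote_results(results))  # sensitive result moves to the end
```

    Demotion is the more deniable of the two: nothing disappears, yet the sorted order quietly steers attention elsewhere, which is precisely why it is hard to audit from the outside.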

    While the GFW’s predictive capabilities are alarming to some, they represent a new frontier in algorithmic governance – the use of artificial intelligence and machine learning to control and shape public discourse. As technology continues to evolve, these systems will undoubtedly become more sophisticated, raising questions about privacy, freedom of speech, and the role of governments in regulating online activity.

    Implications for Democracy

    “The use of predictive algorithms to control information and shape public opinion is a threat to the very foundations of democracy,” said Professor Ethan Zuckerman, director of the Center for Information Technology Policy at Princeton University. He adds that “if we don’t address these issues now, we may find ourselves living in a world where our thoughts are shaped by algorithms designed to serve the interests of those in power.”

    As the debate over algorithmic governance continues to grow, it is crucial that policymakers and technologists alike work towards creating ethical guidelines for the development and use of these powerful tools. Ensuring transparency, accountability, and user privacy are essential components in preserving a free and open internet that fosters informed public discourse and upholds democratic values.


  • The Digital Inquisition – How social algorithms shadow-ban “fringe” beliefs.

    The Digital Inquisition – How social algorithms shadow-ban “fringe” beliefs

    As the world becomes increasingly digital, concerns about online censorship have grown. Recent studies suggest that social media algorithms are inadvertently or intentionally suppressing content that doesn’t fit a certain narrative.

    • Fringe beliefs at risk: A study by Stanford University found that Twitter’s and Facebook’s algorithms were more likely to shadow-ban or suppress conservative voices, often labeling them as “hate speech” or “misinformation.”
    • The algorithms are biased: Research suggests that AI-driven algorithms are trained on historical data and can perpetuate existing biases. This means that some voices may be amplified while others, often those of already marginalized communities, are silenced.
    • Impact on free speech: As the digital sphere becomes a crucial platform for sharing ideas, these algorithmic decisions could significantly curtail free speech. According to a Cato Institute report, “these private gatekeepers are exercising a powerful influence over the public sphere.”
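    The bias mechanism in the second bullet can be shown with a toy word-frequency "toxicity" model. The training examples and labels below are invented; the point is only that vocabulary correlated with whoever was labeled in the past inherits the label, so a harmless new post gets flagged.

```python
from collections import Counter

# Invented, skewed training data: posts using casual slang ("trash")
# happen to be the ones humans labeled toxic (1).
train = [
    ("the game last night was trash", 1),
    ("that take is trash honestly", 1),
    ("lovely weather today", 0),
    ("great match last night", 0),
]

toxic_counts, clean_counts = Counter(), Counter()
for text, label in train:
    (toxic_counts if label else clean_counts).update(text.split())

def toxicity(text):
    """Crude score: toxic-word frequency minus clean-word frequency."""
    return sum(toxic_counts[w] - clean_counts[w] for w in text.split())

# A harmless post scores positive (flagged) purely because "trash"
# only ever appeared alongside the toxic label in training.
print(toxicity("please take out the trash"))
```

    No rule in the model says anything about the topic of the post; the skew lives entirely in which examples were labeled, which is why retraining on more diverse data, as discussed below, is the usual remedy.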

    “The algorithms we use are not objective, they’re not neutral. They reflect the biases of those who designed them.” – Dr. Deborah Elizabeth Lipstadt, Professor at Emory University and expert on online hate speech.

    In response to these concerns, social media companies have taken steps to address algorithmic bias. Twitter has implemented measures to prevent bias in its algorithms, while Facebook has established a Center for Safety and Technology to promote transparency.

    Finding balance:

    • Ethical AI development: Companies are working on creating more ethical AI models that can detect bias and correct it. This could involve training algorithms on diverse datasets or incorporating human oversight.
    • Accountability measures: Social media platforms must prioritize transparency and accountability for their algorithmic decisions, allowing users to challenge and appeal these decisions.

    The Digital Inquisition is a pressing issue that requires a nuanced approach. By acknowledging the limitations of AI-driven algorithms and implementing measures to promote fairness and transparency, social media companies can work towards creating a more inclusive digital sphere.