Algorithmic Cartels: Can India’s competition law catch invisible agreements?
Harshith Viswanath
4/2/26, 4:51 pm
Introduction
The world is undergoing the fourth industrial revolution, where Artificial Intelligence (“AI”) is not only a technological innovation but also a structural force shaping modern digital markets. A recent study by the Competition Commission of India (“CCI”) on AI and its market effects identified key risks, analyzed regulatory frameworks in the US and the UK, and offered recommendations.
One of the major challenges the CCI identified in the study is algorithmic collusion, where firms achieve cartel-like outcomes through self-learning algorithms powered by AI. These algorithms learn from a wide range of data sources, such as general market information, competitors’ prices and market trends.
This phenomenon creates a fundamental tension with Section 3(3) of the Competition Act, 2002, which requires an “agreement” before a cartel can be penalized. This piece argues that India’s framework under the Act is ill-equipped to combat such collusion and recommends reforms to address it. For the scope of this article, algorithmic collusion refers to collusion caused by self-learning (machine-learning) algorithms.
What is Algorithmic Collusion?
A pricing algorithm is a set of instructions fed into a computer program to make decisions about the pricing of products. It does so by analyzing extensive data on market conditions, such as competitors’ prices, past prices and the firm’s warehousing and production costs. Companies use pricing algorithms to make pricing dynamic, personalized and competitive; online platforms such as Uber, Ola and Amazon rely on them.
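To make this concrete, below is a minimal, hypothetical Python sketch of a rule-based pricing algorithm. The function name, margin and undercutting rule are illustrative assumptions, not any platform’s actual logic.

```python
# Minimal sketch of a rule-based pricing algorithm (illustrative only).
# All names and parameters are hypothetical, not drawn from any real platform.

def set_price(own_cost: float, competitor_prices: list[float], margin: float = 0.10) -> float:
    """Undercut the cheapest rival slightly, but never sell below cost plus margin."""
    floor = own_cost * (1 + margin)          # minimum acceptable price
    cheapest_rival = min(competitor_prices)  # observed market signal
    return max(floor, cheapest_rival * 0.99) # undercut the rival by 1%, respecting the floor

# Example: rivals at 105 and 110, own cost 80 -> price set just below 105
print(set_price(own_cost=80.0, competitor_prices=[105.0, 110.0]))
```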
Pricing algorithms can be used by enterprises to collude by fixing market prices and exchanging information. This problem is amplified by self-learning algorithms that adjust prices automatically based on extensive data analysis.
This leads to tacit collusion, which is very difficult to prove: prosecution can be avoided because Section 3(3) of the Competition Act, 2002 presupposes a human element in decision-making. Where no human-driven agreement is needed, Section 3(3) is effectively rendered obsolete.
Critical Analysis: Why does India’s framework fail to capture algorithmic collusion?
India’s framework fails to capture algorithmic collusion because of three challenges:
Human-Centric Agreement
Section 3(3) of the Competition Act, 2002 is doctrinally anchored in human intention and a meeting of the minds. Under Section 2(b) of the Act, an agreement is defined broadly to include “any arrangement, understanding or action in concert”. However, courts have repeatedly required evidence of a meeting of minds.
In Re M/s Sheth & Co, the CCI held that mere parallel conduct is not sufficient to establish collusion. Parallel conduct must be supported by “plus factors”, defined as economic actions beyond parallelism that suggest coordinated action. The Commission’s reliance on such plus factors indicates that it views collusion as rooted in an anthropocentric model: Section 3(3) is built on the premise that the parties to collusion have intent and agency.
This premise is further solidified in the Maruti Suzuki case, where the court held that anti-competitive agreements often lack formal documentation and are instead reached through informal understandings and subtle gestures. A joint reading of these two cases reveals an underlying assumption of human interaction in the treatment of collusion; plus factors such as informal agreements and gestures attempt to bridge the gap between parallel conduct and intentional coordination.
In algorithmic collusion, coordination may emerge without any human communication, since these algorithms can pattern-match and respond to one another’s behavior. In such a scenario, the meeting of the minds occurs not between humans but between algorithms, exposing a blind spot in the method the Act uses to recognize collusion.
Problem of Plus Factors
Through its decisions in cases such as Re M/s Sheth & Co and Maruti Suzuki, the CCI has distinguished between conscious parallelism (legal) and collusion (illegal). The line between parallel conduct and collusion is drawn through plus factors, which serve as additional evidence corroborating coordination, such as the sharing or communication of data.
This evidentiary approach is illustrated in the Cement Manufacturers case, where the CCI found that the sharing of sensitive commercial information facilitated collusion and established a concerted effort to fix prices. The Commission rejected the defence of oligopolistic interdependence because of the coordinated information exchange: the plus factor was not the price similarity itself but the human-mediated communication that sustained it. This framework, however, becomes strained in the case of self-learning algorithms.
Self-learning algorithms will not satisfy the traditional plus factors evolved by the courts: they do not require information sharing because they adjust themselves to market conditions. They need no communication, gestures or informal agreements; designed to maximize profit, they continuously adapt to rival behavior and market signals. As a result, reliance on plus factors rooted in human behavior renders the law obsolete in AI-driven markets.
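To illustrate how such coordination can emerge without any communication, the following stylized Python sketch simulates two independent Q-learning pricing agents of the kind studied in the algorithmic-pricing literature. The price grid, demand model and learning parameters are hypothetical assumptions, and whether the agents settle on supra-competitive prices depends on those choices.

```python
# Stylized sketch of two independent Q-learning pricing agents (illustrative only).
# Each agent observes the rival's last price and picks its own price to maximise
# its own profit; there is no communication or shared objective between them.
# The demand model, price grid and parameters are hypothetical assumptions.

import random

PRICES = [1, 2, 3, 4, 5]      # discrete price grid (cost assumed to be zero)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def profits(p1: int, p2: int) -> tuple[float, float]:
    """Cheaper firm captures the market; demand falls as price rises."""
    demand = 10 - min(p1, p2)
    if p1 < p2:
        return p1 * demand, 0.0
    if p2 < p1:
        return 0.0, p2 * demand
    return p1 * demand / 2, p2 * demand / 2   # split demand on a tie

# Q[state][action]: state is the rival's previous price, action is own price
Q1 = {s: {a: 0.0 for a in PRICES} for s in PRICES}
Q2 = {s: {a: 0.0 for a in PRICES} for s in PRICES}

def choose(Q, state):
    if random.random() < EPS:                 # occasional exploration
        return random.choice(PRICES)
    return max(Q[state], key=Q[state].get)    # otherwise exploit learned values

p1, p2 = random.choice(PRICES), random.choice(PRICES)
for _ in range(200_000):
    a1, a2 = choose(Q1, p2), choose(Q2, p1)   # each reacts only to the rival's last price
    r1, r2 = profits(a1, a2)
    # Standard Q-learning updates, performed independently by each agent
    Q1[p2][a1] += ALPHA * (r1 + GAMMA * max(Q1[a2].values()) - Q1[p2][a1])
    Q2[p1][a2] += ALPHA * (r2 + GAMMA * max(Q2[a1].values()) - Q2[p1][a2])
    p1, p2 = a1, a2

print("Final prices:", p1, p2)                # may settle above the competitive price
```

Nothing in this setup constitutes an “agreement” in the conventional sense: each agent simply learns a pricing policy from the rival’s observed behavior.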
Enforcement and Evidentiary Challenges
Machine-learning and deep-learning pricing algorithms operate as opaque “black boxes”: unlike traditional algorithms coded on the basis of explicit rules, they derive pricing strategies from data through hidden layers of neural networks. They also generate massive, high-frequency datasets; in industries such as e-commerce, millions of price changes occur every day.
This creates enforcement and evidentiary challenges for the CCI. While investigating cartels, the CCI usually relies on direct and circumstantial evidence to prove collusion. Here, direct evidence is absent because the algorithms are self-learning and there is no human coordination behind the outcome, and the CCI has repeatedly emphasized in cases like Re M/s Sheth & Co that mere parallel conduct cannot by itself evidence collusion.
The CCI also lacks the investigative toolkit to assess such algorithms. They generate massive, high-frequency datasets that manual or traditional review cannot handle, and collusive patterns may emerge and dissolve within a single day. Meaningful analysis of such datasets requires continuous, automated data collection, which the CCI lacks. Nor does the CCI have the statutory power to conduct algorithmic audits, access training data, interrogate model architecture or deploy computational tools at scale to combat algorithmic collusion.
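As a rough illustration of the kind of computational screen such a toolkit might include, the hypothetical Python sketch below flags pairs of sellers whose high-frequency price changes move in near-lockstep. It is an assumption about what an automated screen could look like, not an existing CCI tool, and a flag would only mark a pair for closer scrutiny, not prove collusion.

```python
# Hypothetical sketch of an automated price-parallelism screen (illustrative only;
# not an actual CCI tool). It flags seller pairs whose high-frequency price
# changes are highly correlated.

from statistics import correlation   # available in Python 3.10+
from itertools import combinations

def parallelism_screen(price_series: dict[str, list[float]], threshold: float = 0.95):
    """price_series maps seller -> intra-day price observations (equal length)."""
    flags = []
    for s1, s2 in combinations(price_series, 2):
        # Correlate price *changes*, not levels, to focus on co-movement
        d1 = [b - a for a, b in zip(price_series[s1], price_series[s1][1:])]
        d2 = [b - a for a, b in zip(price_series[s2], price_series[s2][1:])]
        if correlation(d1, d2) > threshold:
            flags.append((s1, s2))
    return flags   # pairs worth closer investigation, not proof of collusion

prices = {
    "seller_a": [100, 101, 103, 103, 105, 104],
    "seller_b": [ 99, 100, 102, 102, 104, 103],
    "seller_c": [100,  98, 101,  97, 100,  96],
}
print(parallelism_screen(prices))   # flags only the near-identical pair (a, b)
```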
Algorithmic collusion also raises a deeper issue about the attribution of liability. Section 3 of the Competition Act, 2002 assumes an agreement between enterprises, a human-centric concept. When collusion occurs through self-learning algorithms, the question is whether the company, the developer or the algorithm should be held liable. Without clear attribution, a doctrinal vacuum opens up under Section 3; in the absence of both investigative capacity and clear attribution of liability, algorithmic collusion risks falling into an evidentiary and enforcement blind spot.
Recommendations for the CCI and Lawmakers
The CCI must adopt a multi-pronged approach to prevent algorithmic collusion arising from the use of machine-learning models. This approach would include the following:
Self-Audit of AI systems for compliance
The Competition Act, 2002 should statutorily mandate regular internal audits of algorithms by businesses. These audits would have to be documented so as to explain the algorithm’s decision-making process, its objectives, and the data sources it draws from, acting as a proactive measure to identify potential competition concerns. Audits should be conducted bi-annually to ensure that the algorithm’s adaptive behavior has not resulted in anti-competitive conduct.
The self-audit should take the form of a checklist submitted to the CCI at regular intervals. The checklist would involve testing for specific risks, such as whether the algorithm adjusts itself based on competitors’ prices or could adjust itself in response to a rival’s algorithm, as illustrated in the sketch below.
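One such checklist item could, for example, perturb the competitor-price input and check whether the algorithm’s own output responds. The Python sketch below is purely hypothetical; the audited model’s interface (`price_model`) is an assumption, not a prescribed standard.

```python
# Hypothetical sketch of one self-audit check: does the pricing model condition
# its output on competitors' prices? (Illustrative only; the `price_model`
# interface is an assumption, not a prescribed standard.)

def reacts_to_rivals(price_model, base_inputs: dict, shock: float = 0.10, tol: float = 1e-6) -> bool:
    """Perturb the competitor-price input and check whether the output price moves."""
    baseline = price_model(**base_inputs)
    shocked_inputs = dict(base_inputs)
    shocked_inputs["competitor_price"] = base_inputs["competitor_price"] * (1 + shock)
    shocked = price_model(**shocked_inputs)
    return abs(shocked - baseline) > tol      # True -> responsiveness should be documented

# Example with a toy model that tracks its rival's price
toy_model = lambda cost, competitor_price: max(cost * 1.1, competitor_price * 0.99)
print(reacts_to_rivals(toy_model, {"cost": 80.0, "competitor_price": 105.0}))  # True
```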
Expansion of CCI’s investigative powers
The CCI’s investigative powers should be expanded to undertake empirical and technical audits, including enhanced data-gathering powers. Because machine-learning algorithms are usually “black boxes”, business entities should be required to disclose details such as (1) the design logic of their algorithms, (2) the data sources used for training, and (3) the parameters for price setting.
This would allow the CCI to conduct risk assessments. Similar to Article 6 of the European Union’s (EU) AI Act, pricing algorithms should be classified as “high-risk” systems. This expansion of investigative powers, coupled with self-audits, would assist the CCI in effectively identifying anti-competitive algorithms.
Adoption of an effects-based standard
India should move towards an effects-based model, where liability is triggered by demonstrable market harm rather than proof of an agreement and a meeting of the minds. The CCI would assess algorithmic misconduct by evaluating factors such as price stability, output limitation, and the nature of the algorithm, drawing on the two preceding recommendations.
This approach prioritizes outcome over form and would require the CCI to investigate the algorithm itself. The legal test applied would evaluate the effect on the markets rather than assessing whether there was a meeting of the minds for such an agreement.
Conclusion
Algorithmic collusion challenges the very architecture of Indian competition law. As market coordination shifts from human intent to machine logic, the definition of an “agreement” under Section 3 risks being rendered obsolete. Machine-learning models pose a real risk of collusion, and the law will fail to recognize it if it remains static.
The Competition Act, 2002 must evolve from an intent-based regime to an effects-based standard that prioritizes outcomes over form. In other words, the law will have to evaluate whether certain algorithms create anti-competitive markets rather than relying on a meeting of the minds for the formation of an anti-competitive agreement.
Ultimately, the challenge for the CCI is not to slow AI-driven markets but to future-proof antitrust enforcement. In an economy increasingly driven by machine-learning algorithms, technology must not become a tool for undetectable, anti-competitive collusion.
About the Author
Harshith Viswanath is a law student at NALSAR University of Law, Hyderabad, exploring the intersection of competition law and artificial intelligence.