Algorithms, Access, and Accountability: Unpacking CCI’s Market Study on Artificial Intelligence
- Mahadev Krishnan
- Dec 18, 2025
- 9 min read
Introduction
In 2025, the Competition Commission of India (CCI) published its "Market Study on Artificial Intelligence and Competition", formally joining the international debate on the regulation of algorithmic markets. The study is timely: artificial intelligence has already entered virtually every sphere of the Indian economy, including e-commerce, healthcare, banking, insurance, and logistics. Beyond improving customer experience through personalization, the ability of Artificial Intelligence (AI) to process vast amounts of data and automate pricing has blurred market boundaries and eroded the traditional market structures on which competition law is built.
Indian competition law has traditionally focused on overt forms of market concentration, such as horizontal and vertical mergers and acquisitions that can lead to abuse of a dominant position. However, AI introduces newer, less visible risks: algorithmic opacity, data-driven monopolies, and self-learning systems that can influence output and prices without human guidance. These developments are forcing regulators to reconsider how dominance and collusion manifest in digital ecosystems.
The CCI's AI market study is therefore a proactive diagnostic exercise that seeks to identify potential sources of distortion before they materialise. It poses one central question: how can Indian competition law safeguard incentives to innovate while preventing the creation of AI-based anti-competitive advantages? In this new paradigm, AI is both a competitive parameter and a technological tool, an invisible hand quietly transforming how businesses compete, how consumers decide, and how markets evolve.
This blog examines, firstly, how the CCI maps the AI ecosystem and identifies structural sources of market power; secondly, how algorithmic collusion and data-driven discrimination are reshaping enforcement priorities; thirdly, the CCI’s call for algorithmic accountability and proactive compliance, including the need to confront operational gaps and unresolved shortcomings in auditability and oversight; and lastly, India’s evolving place in the global AI-competition policy framework.
Mapping the AI Ecosystem: Data, Compute, and Market Power
The CCI's market study maps the AI ecosystem into three essential layers, data, compute, and applications, each of which can independently develop into a competitive bottleneck. First, at the data layer, firms with access to large, high-quality datasets can continually improve the accuracy and efficiency of their algorithms. Second, at the compute layer, large players control state-of-the-art chips, cloud infrastructure, and storage, allowing them to run complex models at a scale that is often unavailable to smaller competitors. Third, at the application layer, switching between AI providers can be extremely costly, whether because of high subscription exit fees or the technical difficulty of transferring large volumes of proprietary data, so users often become locked into a platform. For instance, a business that has deeply embedded its workflow with a specific foundation model may discover that moving to a different provider requires re-training on its datasets, rewriting code, or rebuilding internal systems. These substantial switching costs mean that foundation models and AI tools at the application layer can establish firm and enduring user lock-in, making it progressively harder for rivals to attract or retain users.
Fixing the Detection Gap: Acquisitions and India’s Merger Control Future
The key issue identified by the CCI is that India's merger control framework is ill-equipped to detect AI market consolidation. Conventional thresholds based on turnover or asset values fail to capture acquisitions of early-stage AI companies that may have little revenue but hold highly valuable datasets, models, or computing infrastructure. This creates a regulatory blind spot: strategically significant AI startups can be acquired without any notification. Such deals are commonly referred to as "killer acquisitions", in which larger companies buy emerging innovators before they can become a competitive threat. The risk is far greater in AI, where competition turns on control over inputs, data reservoirs, compute capacity, and underlying models, rather than on user-facing markets alone.
Acquisition trends in the Indian technology sector support this pattern. Flipkart bought Liv.ai and Upstream Commerce in 2018, Amazon India bought Perpule in 2021, and in the logistics and delivery sector Swiggy acquired Kint.io while Zomato acquired TechEagle, illustrating how AI capabilities have become essential to operational efficiency. All of these point to a trend in which AI innovators are absorbed into larger ecosystems, reinforcing incumbents' competitive standing and creating input-level dependencies.
Comparable developments in foreign jurisdictions underscore the significance of these issues. The European Union's Digital Markets Act (DMA) imposes special responsibilities on "gatekeepers", the large digital platforms that control key market infrastructure such as data, app stores, or online intermediation services, requiring them to provide equal access and prohibiting practices that may undermine competition. On the same note, the Organisation for Economic Co-operation and Development (OECD) has emphasized that competition authorities should treat data and compute as essential inputs in the digital economy, and that open-access mechanisms and interoperability standards are needed to preserve contestability.
If merger control continues to rely exclusively on financial thresholds, acquisitions of data-rich or strategically positioned AI firms may escape regulatory review despite having the potential to reshape market structures. To respond to this evolving landscape, India will likely need to explore alternative notification triggers, including data-based thresholds, transaction value tests and targeted rules for AI and digital platforms. Such reforms would enable the CCI to capture early-stage acquisitions that pose long-term competitive risks and ensure that India’s AI markets remain open, contestable and innovation-friendly.
Algorithmic Collusion, Pricing, and Data-Driven Discrimination
The most notable concern voiced in the CCI's market study is the potential for algorithmic coordination: AI-driven price-setting algorithms may coordinate market outcomes even in the absence of overt collusion between firms. By constantly tracking competitors' prices, market demand, and customer responses, these algorithms adjust strategies independently and in real time. This can result in what scholars have termed "tacit algorithmic collusion", a situation in which self-learning algorithms, without any explicit agreement, begin imitating each other's pricing patterns and behaviour, ultimately converging on a common pricing pattern and reducing competition.
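The dynamic can be illustrated with a toy simulation (purely hypothetical, not drawn from the CCI study): two sellers each run a simple rule, match the rival if undercut, otherwise probe slightly higher, and never communicate. The cost and cap figures are invented for illustration.

```python
COST = 50.0   # hypothetical unit cost for both sellers
CAP = 120.0   # hypothetical ceiling consumers will tolerate

def reprice(own: float, rival: float) -> float:
    """If the rival is not undercutting, probe 1% higher; otherwise match it."""
    if rival >= own:
        return min(CAP, own * 1.01)
    return max(COST, rival)

# Sellers start at different prices and never exchange information.
price_a, price_b = 100.0, 90.0
for _ in range(200):
    price_a = reprice(price_a, price_b)
    price_b = reprice(price_b, price_a)

# Both algorithms ratchet up to the cap, far above cost, with no agreement.
print(price_a, price_b)  # 120.0 120.0
```

Each rule is individually innocuous, yet the joint outcome is a stable supra-competitive price, which is precisely why enforcement frameworks built around explicit agreements struggle with this behaviour.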
The CCI further notes that this risk is compounded by the emergence of dynamic and personalised pricing models. With granular consumer data, AI can segment users by income, purchasing behaviour, or perceived willingness to pay. Such pricing can make markets more efficient and enable more precise targeting, but it also exposes consumers to data-driven discrimination, in which they are offered worse terms or charged more simply because an algorithm has profiled them. In e-commerce, travel, and other digital service platforms, this asymmetry may reinforce behavioural biases and reduce price transparency.
Cross-platform algorithmic vendors, companies that supply pricing or analytics services to multiple competitors, pose a further threat. Such vendors can become unwitting conduits for information spillover, with sensitive business information passing between competitors under the guise of algorithmic optimization. This form of indirect coordination may be difficult to detect using traditional antitrust methods that look for explicit contracts or human intent.
However, the CCI study prudently observes that not every instance of algorithmic coordination or personalization is harmful. These mechanisms raise concerns only when they produce exclusionary effects, conceal exploitative pricing, or distort genuine consumer choice: the use of algorithmic tools is not in itself problematic, but becomes so when it alters competitive conditions or misleads consumers in ways that undermine market fairness.
The Department of Justice (DOJ) in the United States has recently begun examining "AI-enabled hub-and-spoke" collusion via common data platforms, while the EU's Directorate-General for Competition (DG COMP) has highlighted the importance of algorithmic transparency in predictive pricing. The CCI's findings thus reflect a growing global consensus that AI can be both an instrument of efficiency and an instrument of collusion. The future of enforcement lies not in banning the technology but in making algorithmic markets auditable, accountable, and contestable.
Governance and Compliance: CCI’s Call for Algorithmic Accountability
The CCI's market study, by recommending internal competition-compliance audits for businesses deploying AI systems, signals a marked shift from reactive enforcement to preventive governance. It recommends that internal audits focus on algorithmic design, data-sharing agreements, and the potential for anti-competitive outcomes such as information spillovers or exclusionary effects. The study stresses that, as AI increasingly determines pricing, recommendations, and resource allocation, competition safeguards need to be incorporated upstream, into the very architecture and deployment of these systems.
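One concrete form such internal compliance machinery could take is an audit trail that records every automated pricing decision together with the inputs the algorithm observed, so the behaviour can later be screened for patterns like systematic price-matching. The schema and the follower-pricing check below are assumptions for illustration, not a CCI prescription.

```python
import time

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def log_pricing_decision(sku, price, rival_prices, model_version):
    """Append one pricing decision, with its observed inputs, to the log."""
    AUDIT_LOG.append({
        "ts": time.time(),
        "sku": sku,
        "price": price,
        "rival_prices": rival_prices,  # inputs the algorithm saw
        "model": model_version,
    })

def follower_pricing_rate(log):
    """Share of decisions priced within 1% of the lowest observed rival price,
    a crude internal screen for systematic price-matching behaviour."""
    if not log:
        return 0.0
    hits = 0
    for entry in log:
        if entry["rival_prices"]:
            floor = min(entry["rival_prices"])
            if abs(entry["price"] - floor) <= 0.01 * floor:
                hits += 1
    return hits / len(log)

log_pricing_decision("SKU-1", 99.0, [101.0, 98.5], "pricer-v2")
log_pricing_decision("SKU-2", 120.0, [100.0], "pricer-v2")
print(follower_pricing_rate(AUDIT_LOG))  # 0.5
```

Even a crude screen like this gives an auditor, internal or external, something testable, which is exactly what the study's accountability recommendation currently lacks at the level of standards.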
Operationalising algorithmic audits, however, remains a challenge, and numerous questions are still unanswered. Who will audit algorithmic behaviour? How can firms provide sufficient transparency without compromising trade secrets or exposing proprietary code? What standards will govern algorithmic explainability, and how will regulators determine whether an AI system's internal logic is sufficiently interpretable to rule out anti-competitive effects? These are not abstract questions; they reflect the practical difficulty of imposing accountability on markets shaped by self-learning, black-box algorithms. Without direction, both regulators and businesses are left in doubt: companies may under-comply because of the ambiguity, or over-comply out of fear of regulatory scrutiny.
This gap is particularly problematic because effective algorithmic audits require a combination of legal understanding, technical expertise, and independent oversight capacity that many companies and enforcement agencies are still developing. In the absence of a well-established audit framework, harmful AI behaviour may go unchecked while well-intentioned or pro-competitive AI applications are chilled. For instance, a platform might decline to adopt dynamic pricing tools at all, fearing that the CCI would later treat their use as evidence of algorithmic collusion, even though the tools might have improved efficiency and benefited consumers.
A further weakness of the current approach is that the CCI's recommendations remain aspirational. They recognize the need for algorithmic accountability but establish no minimum standards for audits, documentation, or explainability. This imprecision risks an uneven compliance landscape in which only the best-resourced companies can meaningfully assess their algorithms, while the rest operate in a grey zone. Modelled on the OECD approach to AI accountability, the CCI appears to favour a lighter-touch regulatory framework centred on ethics, self-regulation, and gradual change rather than hard-law rules. While this flexibility gives industry time to adapt, it also underscores the need for a gradual transition to more concrete norms.
The CCI's enforcement toolkit may also expand over time to include structural remedies, such as directives guaranteeing access to data or divestiture in cases of undue dominance, and behavioural remedies, such as internal firewalls or algorithmic disclosures. With this shift, the CCI is becoming a digital governance body as well as a competition enforcer, steering Indian markets towards ethical innovation and long-term algorithmic responsibility.
Conclusion
The global debate over AI and competition has ceased to be theoretical and has moved into the realm of policy. The AI Market Study charts the CCI's middle path: rather than imposing stringent regulation, it adopts a new, flexible framework that nonetheless recognizes the risks of algorithmic concentration. The aim is a system that guarantees free and competitive access to data, compute, and markets, so that AI-led development remains fair and responsible.
The next step will be to translate these findings into algorithmic audits, revised merger thresholds covering data-based transactions, and cross-regulator collaboration with the Data Protection Board. Together, these elements could form a kind of "smart compliance", balancing accountability with the freedom to innovate. As global markets converge on digital power, India will be well placed to anticipate risks that turn not only on market share but on dominance over data, code, and compute, cementing its position as a responsible AI regulator.