Tech Tactics
When Time Itself Fails: NavIC's Atomic Clock Crisis
During the Kargil war in 1999, the Indian armed forces needed to strike high-altitude positions occupied by the enemy with precision-guided munitions. Munitions fired from platforms such as Bofors artillery guns and the then-prototype Pinaka Multiple Barrel Rocket Launcher (MBRL) required precise coordinates that only a Global Navigation Satellite System (GNSS) could supply; in its absence, accuracy was limited to what forward observer-based corrections and imagery and reconnaissance assets could achieve. India had no sovereign satellite navigation system at the time, and the episode taught the country's defence establishment that a GNSS is a necessity for modern warfare. To address this, India's space agency, the Indian Space Research Organisation (ISRO), was tasked with developing the Indian Regional Navigation Satellite System (IRNSS) as the first step towards a GNSS.
The first satellite in the IRNSS constellation, now designated NavIC, was IRNSS-1A, launched in 2013. Since then, 11 satellites have been launched to sustain NavIC; some were partial failures, while others have simply run out of operational life.
As of April 2026, only three satellites in the constellation are able to fulfil their position, navigation, and timing (PNT) function from the correct orbit, below the minimum requirement of at least four satellites for reliable PNT data streams in most military and civilian use cases.
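A position fix needs four satellites because each pseudorange measurement mixes geometric range with the receiver's unknown clock bias, giving four unknowns in total. Below is a minimal sketch of the standard Newton-iteration solve; the satellite coordinates and receiver state are illustrative textbook-style values, not NavIC data.

```python
# Why >= 4 satellites: unknowns are receiver position (x, y, z) plus
# receiver clock bias b, and each satellite contributes one equation:
#   pseudorange_i = ||sat_i - pos|| + c * b
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Illustrative satellite positions (metres, Earth-centred frame)
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos = np.array([-40e3, 10e3, 6_370e3])  # assumed receiver position
true_bias = 3.4e-4                           # assumed clock bias, seconds

# Simulated (noise-free) pseudoranges
rho = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

# Newton iteration on the four unknowns, starting from the origin
x = np.zeros(4)  # [x, y, z, bias]
for _ in range(10):
    r = np.linalg.norm(sats - x[:3], axis=1)
    resid = rho - (r + C * x[3])              # measurement residuals
    # Jacobian of predicted pseudorange w.r.t. [pos, bias]
    J = np.hstack([-(sats - x[:3]) / r[:, None], C * np.ones((4, 1))])
    x += np.linalg.solve(J, resid)

print("recovered position (m):", np.round(x[:3]), " bias (s):", x[3])
```

With only three satellites the Jacobian has more unknowns than equations and the clock bias cannot be separated from position, which is why a depleted constellation degrades PNT rather than merely coarsening it.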
At the heart of every navigation satellite ticks at least one atomic clock, a device so precise that it loses less than a second over millions of years. When that clock stops, the satellite goes blind. On 13 March 2026, the atomic clock aboard IRNSS-1F failed. Each NavIC satellite was designed with three or four atomic clocks for redundancy, so that if one failed the system could switch to another. IRNSS-1F, however, had already lost two of its three rubidium clocks years earlier and was limping along on a single backup; when that final clock gave out, the satellite was left with no working clock at all.
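The scale of the clock problem can be put in numbers: a clock's fractional frequency error accumulates into time error, and every nanosecond of time error translates into roughly 30 cm of ranging error. The stability figures below are typical orders of magnitude for each clock class, not NavIC specifications.

```python
# How clock stability maps to pseudorange error accumulated over one day
# without ground corrections. Stability values are rough class-typical
# orders of magnitude, assumed for illustration.
C = 299_792_458.0        # speed of light, m/s
SECONDS_PER_DAY = 86_400

clocks = [
    ("quartz (TCXO)",          1e-9),   # assumed fractional stability
    ("rubidium",               1e-13),
    ("passive hydrogen maser", 1e-14),
]

for name, frac_stability in clocks:
    time_err = frac_stability * SECONDS_PER_DAY   # seconds drifted per day
    range_err = C * time_err                      # metres of ranging error
    print(f"{name:>24}: {time_err:.1e} s/day -> {range_err:,.2f} m/day")
```

A rubidium standard drifting at one part in 10^13 still accumulates metres of ranging error per day, which is why navigation constellations rely on regular ground-segment clock corrections on top of the clocks themselves.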
The first NavIC satellites carried Swiss rubidium clocks supplied by SpectraTime. IRNSS-1A, 1C, 1D, 1E, 1G, and eventually 1F all suffered clock failures; in five of these satellites, all three clocks stopped working. This was not an isolated Indian problem: the European Space Agency (ESA) investigated similar failures in its Galileo navigation constellation in 2017, identifying short circuits during ground testing as a possible cause. But for India, the cumulative damage is more severe.
The situation was made worse by a parallel failure earlier in 2026. NVS-02, a new-generation NavIC satellite launched in January 2025 to replace one of the older satellites, was unable to reach its final orbit due to an electrical malfunction. A connector failure disrupted the signal to the pyro valve, preventing fuel flow. This meant the satellite launched to begin replenishing the depleted constellation was itself stranded in a transfer orbit, adding to the crisis rather than alleviating it.
The one genuinely positive development in an otherwise troubled story is India's own atomic clock. Work on an indigenous rubidium clock began in the mid-2010s, and by 2022–2023 the Indian Rubidium Atomic Frequency Standard (IRAFS) was fully qualified. ISRO's Space Applications Centre in Ahmedabad led the effort, closing a major capability gap on a very short timeline.
The Automated Shipyard: How Welding is the Lowest Hanging Fruit for Robotic Optimisation in the Industry
Shipbuilding occupies an important position in the global industrial hierarchy; it is simultaneously one of humanity's oldest manufacturing disciplines and one of the last major industries to have resisted the full force of automation. This gap exists not for lack of ambition, but because of the genuine engineering difficulty of applying consistent, repeatable machine processes to objects the size of city blocks, each of which is, in its own way, bespoke.
One of the biggest hurdles shipbuilders face is a major shortage of skilled workers, especially experienced welders. Robotic welding systems may help fill the gap, taking on tasks that previously required hundreds of welders.
Welding has always been an essential skill in 20th- and 21st-century industrial processes and remained a highly valued trade in the collective West even during the heyday of the white-collar IT and software boom. Assembly-line robots have long dominated production environments such as automotive and digital device plants, where everything rolling off the line is essentially a copy of one make and model with little or no variance among units.
In shipbuilding, however, even ships of the same class can differ in almost every parameter, and an automated system may lack the dynamic adaptability that can be trusted at the very low tolerances of safety and reliability required. While generative AI, and especially agentic AI, can address this to some extent, reliability remains the greatest challenge in Large Language Model (LLM) deployment even in the much lower-stakes white-collar industries.
Consider an agentic AI model trained on a professional welder's journal: how they adapted certain metals to different hull sizes across projects at different shipyards, the welding torches available, their operating temperatures, the malleability of the steel or titanium blocks or sheets being worked, and the geometry of each joint and its relationship to the overall hull. These are all highly dynamic, intuition-driven features that are hard to feed into a prompt or otherwise communicate to an AI agent. Moreover, what is ubiquitous common sense to a human may look to the machine like an unnecessary, inefficient step worth skipping, especially if corporate cost-cutting and time optimisation are influencing the agent or the underlying model at any level.
This is not to say that efforts to adopt AI and unlock efficiencies are not being made. In the US, Huntington Ingalls Industries Inc., one of the country's few large shipbuilders, has been open about exploring AI-enabled automation in various roles across its value chain.
The issue of productivity is particularly relevant in the Indian context as the country is located in what is perhaps the most coveted maritime geography of the Indo-Pacific and has a large coastline with several state- and privately-owned shipyards. However, retaining trained and experienced high-skilled labour has been a challenge for them all, which has delayed both shipbuilding and refit orders.
To alleviate these labour issues, automated welding systems can be a lifesaver: they reduce engineering time while improving consistency and lowering rework rates. In yards where complex naval platforms may involve tens of thousands of welds, each defect avoided reduces downstream block-fitting corrections and schedule slippage.
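A back-of-envelope model shows why the per-weld defect rate dominates the rework burden at this scale; every number here is an assumption for illustration, not industry data.

```python
# Toy rework model: with tens of thousands of welds per hull, small
# per-weld defect probabilities compound into large repair workloads.
# Weld counts, defect rates, and repair times are all assumed values.
def expected_rework_hours(n_welds: int, defect_rate: float,
                          hours_per_repair: float) -> float:
    """Expected rework burden for one hull, in labour hours."""
    return n_welds * defect_rate * hours_per_repair

N_WELDS = 50_000          # assumed welds on a complex naval hull
HOURS_PER_REPAIR = 4.0    # assumed grind-out-and-reweld time per defect

manual = expected_rework_hours(N_WELDS, 0.03, HOURS_PER_REPAIR)    # assumed 3% defect rate
robotic = expected_rework_hours(N_WELDS, 0.005, HOURS_PER_REPAIR)  # assumed 0.5%

print(f"manual:  {manual:,.0f} h of rework")
print(f"robotic: {robotic:,.0f} h of rework")
print(f"saved:   {manual - robotic:,.0f} h per hull")
```

Under these assumptions a six-fold drop in defect rate frees thousands of labour hours per hull, before counting the knock-on block-fitting and schedule effects mentioned above.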
The Silicon Squeeze: Why an AI Future may mean Bottlenecks for other Sectors
The global general-purpose computing component market is increasingly being shaped not just by economics or technology, but by geopolitics, or rather the geo-economics of modern high-tech industry, as strategic competition and national policy begin to determine how and where critical components are produced, allocated and restricted, transforming what was once a predictable industrial cycle into a more fragmented and contested system.
Export controls imposed by the US on advanced semiconductor manufacturing capabilities, combined with efforts by China to build domestic capacity under constraint, have introduced a layer of political risk into supply chains that were historically optimised for efficiency, and this shift is now intersecting with deeper structural changes inside the industry itself.
At the centre of this change, at least for the memory and flash sub-sector, are three companies that underpin most of the world's memory and flash storage production: Samsung Electronics, SK Hynix and Micron Technology. Together, they have begun to redirect their most advanced fabrication capacity away from conventional memory chips and towards high-bandwidth memory, a specialised and far more profitable category designed to support AI systems operating at a massive scale.
This shift is not a reduction in output but a redefinition of it because semiconductor fabrication operates within fixed physical and economic limits, meaning that every wafer committed to high-bandwidth memory effectively means one wafer less available for the chips that power smartphones, personal computers, vehicles, and industrial systems, creating a redistribution of supply that is being felt across the global economy rather than a simple shortage caused by underproduction.
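The zero-sum wafer arithmetic can be made concrete with a toy allocation model; the capacity, dies-per-wafer, and yield figures below are invented for illustration, not actual fab data.

```python
# Fixed short-run fab capacity means every wafer reallocated to
# high-bandwidth memory (HBM) comes directly out of commodity DRAM
# output. All figures are assumptions for illustration only.
TOTAL_WAFERS_PER_MONTH = 100_000   # assumed leading-edge wafer starts
DRAM_CHIPS_PER_WAFER = 1_800       # assumed good commodity DRAM dies/wafer
HBM_STACKS_PER_WAFER = 450         # HBM yields far fewer sellable units

def output_mix(hbm_share: float) -> tuple[float, float]:
    """Monthly (DRAM chips, HBM stacks) for a given HBM wafer share."""
    hbm_wafers = TOTAL_WAFERS_PER_MONTH * hbm_share
    dram_wafers = TOTAL_WAFERS_PER_MONTH - hbm_wafers
    return dram_wafers * DRAM_CHIPS_PER_WAFER, hbm_wafers * HBM_STACKS_PER_WAFER

for share in (0.0, 0.2, 0.4):
    dram, hbm = output_mix(share)
    print(f"HBM wafer share {share:.0%}: DRAM {dram:,.0f}  HBM {hbm:,.0f}")
```

Note the asymmetry: because HBM yields fewer sellable units per wafer, each percentage point of capacity shifted removes far more commodity chips from the market than it adds HBM stacks, which is why the squeeze is felt so quickly downstream.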
The demand driving this reallocation comes from a concentrated group of technology firms, including Google, Microsoft, Meta, and Amazon, which are investing heavily in AI infrastructure and have adopted procurement strategies that prioritise guaranteed access to supply over cost considerations, effectively absorbing available capacity and signalling to manufacturers that high-margin, AI-oriented memory will remain the most rational allocation of resources for the foreseeable future. And it is not only limited to memory: the fabrication facilities for flash storage, traditional hard drive storage solutions, Graphics Processing Units (GPUs), and processors are all equally stressed.
For other sectors, the consequences are increasingly visible as prices of standard memory and other general-purpose computing components rise and availability tightens, placing pressure on consumer electronics manufacturers and industrial producers that rely on stable input costs and predictable supply, resulting in reduced device shipments, delayed production schedules, and compressed margins across industries that have limited ability to replace these components.
The traditional self-correcting mechanism of the memory market, in which high prices trigger rapid capacity expansion and eventual stabilisation, is still theoretically present but now constrained by the long timelines and immense capital requirements associated with building semiconductor fabrication plants, as new facilities planned by companies such as Micron Technology and SK Hynix are not expected to reach meaningful production levels until the latter part of the decade, and much of that future capacity is already earmarked for AI-focussed demand rather than the broader market.
The geopolitical overlay further complicates any adjustment because export controls, trade tensions, and industrial policy interventions are deciding not only where new capacity can be built but also who can access it, introducing uncertainty into long-term planning and reinforcing a system in which supply is increasingly segmented along strategic lines rather than distributed according to purely market-driven signals.
What is emerging is a computing components market that no longer behaves like a cyclical commodity system but instead functions as a strategically allocated resource in which capital, manufacturing capability, and output are being channelled toward a narrow set of high priority applications, leaving the rest of the global economy to operate within the constraints of what remains available.
The Localisation Wager: Why India's LLM Ecosystem is Local Language Centric
The divide in India’s AI adoption is often described in terms of access, but the more consequential fault line runs through language, shaping who can meaningfully use these systems and who remains excluded despite their nominal availability. Global platforms such as ChatGPT, Claude and Google Gemini have achieved deep penetration among English-speaking urban professionals, yet beyond that segment usage drops, not because devices or networks are missing, but because the systems themselves do not operate with sufficient fluency in the languages that structure everyday life across much of the country.
The limitation is not a question of technical sophistication, since these models perform impressively at the tasks they are designed for; rather, it is a question of how language is represented within them and how that representation translates into real-world use. Training data that is overwhelmingly drawn from English-language internet sources, even when supplemented by curated samples in Hindi, Tamil, or Bengali, does not produce systems that can navigate the idiomatic, cultural, and domain-specific complexity of those languages as they are used in governance, agriculture, law, or informal commerce. What appears as multilingual capability in a benchmark setting often reveals itself as a thinner layer of translation when applied at population scale.
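One concrete mechanism behind this gap is tokenisation: byte-level tokenisers trained mostly on English tend to split Indic scripts into far more pieces, partly because Devanagari, Tamil, and Bengali characters each occupy three bytes in UTF-8 where ASCII characters take one. The sketch below shows only the raw byte inflation; the sample phrases are rough translations chosen for illustration.

```python
# UTF-8 byte inflation for Indic scripts: a byte-level tokeniser with an
# English-heavy vocabulary sees roughly 3x more raw bytes per character,
# so the same sentence costs more tokens, more inference money, and more
# of the model's context window. Phrases are illustrative translations.
samples = {
    "English": "Welfare scheme application",
    "Hindi":   "कल्याण योजना आवेदन",
    "Tamil":   "நலத்திட்ட விண்ணப்பம்",
}

for lang, text in samples.items():
    chars = len(text)                        # Unicode code points
    utf8_bytes = len(text.encode("utf-8"))   # raw bytes a tokeniser ingests
    print(f"{lang:>8}: {chars:>3} chars -> {utf8_bytes:>3} UTF-8 bytes "
          f"({utf8_bytes / chars:.1f} bytes/char)")
```

Byte count is only the floor of the problem: a vocabulary with few learned Indic merges fragments those bytes into near-byte-level tokens, whereas common English words compress to a single token each.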
The consequences of that gap become visible in practical settings, where a system that can generate grammatically correct Hindi may still fail to convey the meaning of a government welfare scheme in the terms that a farmer in Uttar Pradesh can understand and act upon, or where a voice interface that cannot interpret Marathi agricultural vocabulary remains effectively unusable in rural Maharashtra while functioning adequately in an urban setting. The distinction is not semantic but structural, because it determines whether AI serves as a tool of inclusion or remains confined to a narrow slice of the population.
India’s policy response has begun to treat this as a foundational problem rather than a marginal one. The IndiaAI Mission, implemented through the Ministry of Electronics and Information Technology, has been designed as a national effort to build domestic model capability, supported by subsidised access to computing infrastructure that lowers the cost barrier for local developers. By making tens of thousands of graphics processing units available at controlled rates, the programme attempts to shift the constraint from capital access to execution.
Within this framework, Sarvam AI has emerged as a central actor, developing LLMs trained entirely on Indian infrastructure and optimised for deployment under local economic conditions. Its systems employ architectural approaches intended to reduce the cost of inference, reflecting the reality that population-scale usage in India cannot sustain the same per query economics as enterprise deployments in wealthier markets. Its document intelligence tools, which perform strongly on tasks involving multiple scripts and complex layouts, point toward applications that align more closely with administrative and public service needs than with the English language content generation that dominates global use cases.
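A toy per-query cost model illustrates why inference economics dominate at population scale; the user counts, token volumes, and prices below are assumptions for illustration, not Sarvam or industry figures.

```python
# Why per-query cost, not model capability, bounds population-scale
# deployment: tiny unit costs multiply across billions of monthly
# queries. All volumes and prices are assumed for illustration.
def monthly_cost(queries_per_day: float, tokens_per_query: float,
                 usd_per_million_tokens: float) -> float:
    """Monthly serving cost in USD for a given traffic and token price."""
    return queries_per_day * 30 * tokens_per_query * usd_per_million_tokens / 1e6

QUERIES_PER_DAY = 100e6 * 5   # assumed 100M daily users, 5 queries each
TOKENS_PER_QUERY = 1_000      # assumed round-trip tokens per query

for price in (5.0, 0.5, 0.05):   # assumed USD per million tokens served
    cost = monthly_cost(QUERIES_PER_DAY, TOKENS_PER_QUERY, price)
    print(f"${price}/M tokens -> ${cost:,.0f} per month")
```

Under these assumptions, a hundred-fold reduction in serving cost is the difference between a national deployment costing tens of millions of dollars a month and one costing under a million, which is why inference-efficient architectures matter more here than leaderboard performance.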
A broader ecosystem is taking shape alongside it. IIT Bombay has supported BharatGen, a multilingual model initiative trained across a wide range of Indian languages with an emphasis on legal and cultural contexts. Gnani.ai has developed voice systems capable of replicating speech patterns across multiple languages from minimal input, addressing the challenge of building conversational interfaces in a linguistically diverse environment.
The constraint that runs through all these efforts is the depth of capital required to sustain them at the highest levels of global competition. Investment in Indian AI startups remains modest when compared with the scale of spending by large technology firms elsewhere, where individual projects can command resources that exceed national-level programmes. The IndiaAI Mission provides a meaningful foundation for domestic capability but it does not attempt to match the pace or scale of frontier model development internationally.
What is taking shape instead is a different strategic objective, one that prioritises building systems capable of operating across India’s linguistic diversity, institutional frameworks and economic constraints, ensuring that AI can be deployed in public services, agriculture, healthcare and law at a scale that imported systems cannot easily achieve. This is not a race to build the most powerful model in the world but an effort to build models that understand India well enough to matter where it counts, a narrower goal perhaps, but one that carries its own form of strategic weight.
Check these out:
1. The AI revolution in the semiconductor industry: Driving Growth and Innovation. Accenture, 16 March 2025. https://www.accenture.com/in-en/blogs/high-tech/ai-revolution-semiconductor-industry
2. How Semiconductors Are Made (And Fuel the AI Boom), YouTube channel Super Data Science, 1 April 2025. https://www.youtube.com/watch?v=sUYN-1bCS2Q
3. Hold on to Your Hardware. https://xn--gckvb8fzb.com/hold-on-to-your-hardware/
4. Guenter W. Hein, 2020. Status, perspectives and trends of satellite navigation. Satellite Navigation 1(1): 22, August. doi: 10.1186/s43020-020-00023
5. Yanbo Zhang, Sumeer A. Khan, Adnan Mahmud, Huck Yang, et al., 2025. Exploring the role of large language models in the scientific method: From hypothesis to discovery. npj Artificial Intelligence, Article no. 14 (2025).
6. Vit Ruzicka, 2025. AI On-board of Satellites: Towards Autonomous Scientific Instruments. YouTube, 4 November 2025. https://www.youtube.com/watch?v=6le9sNvBa1k
Thank you for taking the time to read Tech Tactics this month. Stay tuned for our next edition, where we will continue to explore the relationship between global high-tech value chains and geopolitics.


