DC Pathak (The writer is a former Director, Intelligence Bureau)
Of all the matters validating long-term strategic cooperation between India and the US, perhaps the most important point in the current situation, one that may have gone somewhat unnoticed by analysts, is the pledge of common resolve by President Joe Biden and Prime Minister Narendra Modi to keep up commitments on safety, security, and trust with regard to Artificial Intelligence (AI).
The US is already working with seven leading AI companies, including Google, Microsoft, Amazon, and Meta, to make sure that AI applications are developed as safe and trustworthy instruments for the benefit of humanity at large.
It goes to the credit of Prime Minister Modi that, on his recent visit to the US, he took up AI issues in depth during his talks with President Biden, in the full realisation that AI is going to affect everyone’s life.
The Science Advisor to President Biden, Arati Prabhakar, who is of Indian origin, announced that Indo-US cooperation would boost the ability to deal with AI’s harms and to harness it for good.
There is an implied acknowledgement of the reality that Information Technology can be used with equal effectiveness as a weapon of combat in ‘information warfare’ and other covert operations, as well as a means of spreading subversion and radicalisation.
The potential for misuse includes malware injection, data manipulation, forgery, cyber attacks, and terrorism. On the other hand, AI-powered cyber security solutions working in coordination with human intelligence can be extremely useful, particularly when dealing with large amounts of data: AI can analyse this data to find patterns and anomalies and possibly detect the modus operandi of an adversary’s operation. A sobering thought is that 90 percent of all the digital data in the world has been created in the last two years. Given the speed with which information is generated, the protection of personal data is an emerging challenge.
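As a purely illustrative sketch of the kind of pattern-and-anomaly analysis described above, the snippet below uses the open-source scikit-learn library to flag unusual records in a stream of network-activity data; the features, numbers, and thresholds are hypothetical and not drawn from any real system.

```python
# Illustrative sketch: flagging anomalous network-activity records with an
# Isolation Forest (scikit-learn). All feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per connection: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))
suspicious = np.array([[50_000, 10, 2],      # large, exfiltration-like upload
                       [5, 5, 3600]])        # long, nearly silent connection

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for inliers
for row, label in zip(suspicious, model.predict(suspicious)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{row} -> {status}")
```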
Smart computer systems are becoming increasingly adept at observing and interpreting what we, as people, do; this includes the skills of ‘looking’, ‘listening’, and ‘speaking’. They learn to discover patterns and rules from huge amounts of data, which can even give them an upper hand in some areas of human activity.
AI systems are faster, never tire, and have a built-in capacity to learn from examples. They can recognise art forgery, detect dementia before a medical specialist would consider that option, and predict diabetes. The predictive value of AI is very extensive within the input-output paradigm, which has remained its defining feature.
Amazon is said to have taken to ‘predictive shipping’, whereby it would be able to send you a package before you even knew you wanted it. AI does appear to be overriding the limitations of the input-output principle while creating new products and services.
An area of concern regarding AI is that if ‘automated decision-making systems’ are fed discriminatory data, they will reproduce the bias of that input, reflected also in the choice of algorithm, and yet falsely inspire greater confidence because of the human tendency to consider such systems trustworthy. This bias can come into play in the area of ‘predictive policing’, where the vulnerable in society could face an undeserved disadvantage on account of a contaminated data set traceable to hostility of a certain kind on the part of data providers. On the other hand, an AI company can use its resources to produce clearly defined profiles of people with great precision, which can be used for political purposes.
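To illustrate the mechanism, and not any specific policing or profiling system, the hypothetical sketch below trains a simple classifier on historical labels that were biased against one group; the model then reproduces that bias on new cases even though the underlying behaviour of the two groups is identical. All names and numbers are invented for illustration.

```python
# Hypothetical illustration: a model trained on biased labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two groups (0 and 1) with identical underlying "risk" behaviour.
group = rng.integers(0, 2, size=n)
true_risk = rng.normal(size=n)

# Biased historical labels: group 1 was flagged more often for the same behaviour.
flagged = (true_risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, flagged)

# The trained model now assigns a higher "risk" score to group 1
# even when the observed behaviour is exactly the same.
same_behaviour = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_behaviour)[:, 1])  # probability of being flagged
```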
It is a measure of the apprehensions about the possible misuse of AI that governments across the world are already seized with the issue of putting in place laws and restrictions to regulate AI operations.
The Telecom Regulatory Authority of India (TRAI) has recommended the creation of an independent statutory body to ensure the right development of AI across sectors. It wanted the adoption of an ethical code by both public and private entities.
TRAI has flagged the ethical use of data as a major concern for the government as well as corporate entities. It has to be understood that AI-powered national security systems run the risk of hacking or manipulation by adversaries, with disastrous consequences. AI is effectively used in rockets, missiles, aircraft carriers, naval assets, and other automated defence systems. Creators of AI need to know that the new technology could also be used by the enemy to indoctrinate young minds and raise agents of terror, including ‘lone wolves’. On the other hand, AI-based systems can be used proactively to detect whether a website or email is a phishing trap. In short, the inevitable use of AI brings its own challenges spanning the ethical and regulatory realms.
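As a minimal, hypothetical sketch of how an AI-based system might score a URL for phishing risk, the snippet below trains a simple decision tree on a few hand-crafted URL features; real defensive systems rely on far richer features, large labelled datasets, and reputation feeds, none of which are shown here.

```python
# Minimal, hypothetical sketch of URL-based phishing scoring; the URLs, features,
# and labels below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

def url_features(url: str) -> list[int]:
    """Hand-crafted features: length, digit count, '@' present, hyphens, dots."""
    return [len(url),
            sum(c.isdigit() for c in url),
            int("@" in url),
            url.count("-"),
            url.count(".")]

# Tiny illustrative training set (labels: 1 = phishing, 0 = legitimate).
urls = ["http://secure-login-bank0f1ndia.example-update.com/verify",
        "http://192.168.1.10@paypa1-account.example.com/login",
        "https://www.irctc.co.in/",
        "https://www.sbi.co.in/web/personal-banking"]
labels = [1, 1, 0, 0]

model = DecisionTreeClassifier(random_state=0)
model.fit([url_features(u) for u in urls], labels)

test = "http://update-kyc-details.example-bank.com/confirm?id=9931"
print("phishing" if model.predict([url_features(test)])[0] == 1 else "legitimate")
```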
Major powers like the US and China are investing heavily in creating AI-based systems in their quest to maintain a military lead. AI is being used for preparations needed for the future battlefield.
Meanwhile, AI’s wide applicability in almost every sector has permeated human lives, ranging from the service sector, with voice assistants like Alexa and Siri and OTT platforms, to health care, agriculture, climate change, and finance. However, its immense potential in the areas of security and defence is what is attracting the attention of policymakers and defence analysts.
Intelligence, Surveillance, and Reconnaissance (ISR), cyber security, military logistics, and, in particular, Lethal Autonomous Weapons Systems (LAWS) have acquired newfound importance because of AI, as have image classification from drone footage and geospatial data analysis.
In the military domain, an area of concern is that AI is providing new autonomous and affordable capabilities to a wide range of actors. AI has given weak states and non-state actors more options to enhance their capabilities and, in the process, strengthened the possibilities of asymmetric warfare. AI development premised too heavily on its potential positives should not make people oblivious to its negative side, including, in particular, the danger it poses to national security itself.
The risk of AI chatbots influencing young minds vulnerable to neurodivergence and turning them towards terrorism is real. Greater transparency has to be demanded of AI technology companies, including identification of the personnel responsible for maintaining the guardrails. These personnel could themselves become a source of threat on account of some vulnerability of their own; they should rightly be kept within the purview of the functioning internal vigilance system that all sensitive organisations are expected to have.
Advancement in AI is expressed mainly in ‘machine learning’, which can enable a high degree of automation in otherwise labour-intensive activities such as satellite imagery analysis and cyber defences.
AI will affect national security generally even as it drives military and information superiority, because the adversary could be using the same AI capabilities to damage the other side.
In 2020, the US National Security Commission on Artificial Intelligence recommended that the US form a US-India Strategic Tech Alliance (USISTA) to develop an Indo-Pacific strategy on emerging technologies, considering India’s enhanced geopolitical standing.
The India-US 2+2 dialogue has called for strengthening bilateral partnerships on these technologies, particularly in the field of energy.
The Quad Summit of 2022 flagged cooperation in the sphere of AI. Advances in AI will progressively multiply threats, challenges, and opportunities from a national security perspective. India, therefore, needs to create a supportive AI ecosystem.
Prime Minister Modi’s visit to the US has paved the way for lasting cooperation between the two countries in the best interests of both sides.
India is well ahead on its AI journey, and AI-enabled projects in defence are getting priority. An AI-based signal intelligence solution can enhance the intelligence collection and analysis capabilities of the armed forces. India is poised to become a powerhouse of AI research and innovation and a global leader in responsible AI.
According to Sundar Pichai, CEO of Google, ‘AI is probably the most important thing that humanity has ever worked on’. India hosts one of the most thriving start-up ecosystems in the world, with dozens of unicorns using AI-powered tools. They are expanding the scope of AI strategy for India and the world, especially for the Global South, which this country is successfully leading as President of the G20. (IANS)