A Governance Framework for AI Quality Testing in India


P Rajagopal Tampi



Introduction

The world has known relative peace and nation-based governance ever since the Peace of Westphalia, signed after the Habsburgs lost the Thirty Years' War in Europe in October 1648. It subordinated the power of religion, landlords and royalty to the power of the State and introduced the concept of nations and national boundaries within which the sovereign governance of the state reigned supreme. The peace and order in which we live depends largely on the concepts of nation states, defined borders, systems of governance and the acceptance of these structures by citizens. For more than 300 years, governments have been all powerful, legislating and executing while the judiciary delivers justice.

This system works well when sufficient working legislation is in place to safeguard the nation's and its citizens' interests against any mishap that could harm citizens, their welfare, the nation's economy or its security. When new threats arise, governments need to put new laws and their executing structures and agencies in place, lest the benefits provided by the nation's governance be lost or eroded.

The intention of this article is to provide the raison d'être for India to create policy and testing capability for software in general, keeping pace with galloping technology. It is software that will be responsible for the lion's share of technological progress in the foreseeable future. Countries aspiring to global leadership, like India, should promote software policy and testing capabilities.

This article uses the case study of Artificial Intelligence (AI) to present the crucial need for software policy and testing for India.

This article does not suggest that AI is only a threat. It is viewed as one solely for the purpose of legislating the safeguards for AI deployment that are required in the interests of nations and citizens. In the view of the author, AI is a technological innovation that should be embraced and adopted for its many benefits. Humanity should adapt to the new discovery for our own advancement. AI must be democratized. Nevertheless, like all technologies, AI comes with drawbacks that need to be governed.

A brief on Artificial Intelligence

The exponential growth of AI poses emerging threats which need to be studied for regulation by governments. Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.1

The gravity of the threat to peaceful coexistence and national interests can vary greatly with the functionality of the AI Module (AIM) involved. Improper AI algorithms can lead to loss and impairment of life, loss of control over aspects of our lives, unethical practices, personal data theft, and unfair and monopolistic commercial practices, which could lead to trade and even physical wars, among a host of other concerns. The masked malicious intent of enemy nation states can be exercised in the guise of software and embedded logic to grab economic advantages, create political instability or conduct espionage. The recent banning of TikTok in India and of Huawei in the US are examples. Technology now plays a core role in geopolitics.

AI should be used to make humans more productive, provide leverage for our thinking and analysis, and automate many tasks. Tasks involving threat or risk to human life, unethical practices (including commercial practices) and Human Decision Threshold Control (HDTC) must be regulated by governments. Otherwise, there is a real risk of corporates usurping some of these powers, sometimes unknowingly and sometimes surreptitiously, upsetting the checks and balances that safeguard our peaceful existence within and among nations.

The need for Regulatory oversight

It is by no means sufficient to trust corporates to be correct, fair, ethical and thorough in their quality practices, given their prime focus on business profits and shareholder returns. Having worked in and observed the development of IT over the last half century, the author can confirm that code is in most cases written by programmers with a few years' experience trying to achieve targeted objectives. Owing to insufficient domain knowledge and, in some cases, a lack of original thought and of exhaustive questioning of unhandled possibilities, code written in the quest to get the job done without bugs may still not meet the stringent quality standards expected of AI applications. Ethical lapses, life-threatening scenarios, wrong priorities and poor attitudes towards fairness and towards ceding control (when human intervention is called for) are all possible. Therefore, when it comes to AI in particular, regulatory oversight is necessary to ensure that citizens' lives, safety and rights are not compromised. The examples below show the conflict of interest that exists today when AI is the functional area.

On 6 June 2022, Google placed a researcher, Blake Lemoine, on forced leave for suggesting that its AI chatbot, the Language Model for Dialogue Applications (LaMDA), had become sentient. Social media was agog with speculation: had Lemoine been thorough and truthful in his research? Such statements aroused curiosity, consternation and fear alike on the internet. Google responded by denying that LaMDA was sentient. It is a reasonable inference that Google may also have been anxious about the LaMDA project being switched off, given the weighty issues involved.

The Boeing 737 MAX suffered a recurring failure in its Maneuvering Characteristics Augmentation System (MCAS), causing two fatal crashes, Lion Air Flight 610 and Ethiopian Airlines Flight 302, in which 346 people died in total.2 Major lapses in the area of flight control software lay at Boeing's doorstep. Boeing had to pay over $2.5bn in criminal monetary fines and compensation.

In 2015, Volkswagen was found to have cheated massively on emissions standards for its diesel cars. The company admitted to using special software to defeat the emissions testing process in the US: the software detected when a car was under test and engaged full emissions controls only then, so that the car could obtain an emissions compliance certification it would not otherwise have earned. The fraud cost the company some $35bn in regulatory fines, legal costs and compensation.3

Barely seven years after the Volkswagen scandal, the level of automation in cars has skyrocketed. We can talk to our cars to send messages, play music or the news, call our friends and much else. The mother lode of all automotive applications is the Autonomous Driving Module (ADM). In the light of the above arguments, the need for Quality Assurance (QA) and Quality Control (QC) of AIMs assumes critical importance.

As an illustration of AIM testing, this article focuses on the need for QA and QC of the Autonomous Driving (AD) AIMs being deployed in the automotive sector, chosen because autonomous driving is among the most advanced AI functions and poses a direct threat to human life.

Autonomous driving

Autonomous driving has been implemented by Tesla and many other companies worldwide. From July 2021 to October 2022, the US Department of Transportation reported 605 crashes involving vehicles equipped with advanced driver assistance systems (ADAS), also known as autopilot, and 474 of them, roughly three-quarters, were Teslas.4


A Regulatory Organization for AI Module Testing


Fig 1 shows a proposed structure for the country’s Central Software Testing Organization.

Center for Software Policy (CSP)

The central government should set up a Center for Software Policy. All AI and other software QA should be the responsibility of the CSP. Since AI testing will span vast and distributed areas, it will involve testing AI modules installed in automobiles, mobile phones, laptops, hospitals, R&D labs and the products used by corporations and individuals. Considering this, the CSP should be designed as an umbrella "think tank" body responsible only for designing and publishing quality specifications and testing specifications, including functional descriptions of the test cases to be executed in all verticals, including AI. Boundary conditions and the probability of failure of the AI module (AIM) should also be defined by the CSP. These policies should be disseminated to relevant industry bodies for comments before finalization.

For AI, the following broad coverage horizontals (categories) are recommended for the preparation of test suites/cases:

1. Threat to Life AIM: whenever the functionality involves a threat to human life.

2. Human Decision Threshold Control (HDTC) AIM: whenever there is a doubt as to whether we want to allow the machine to perform the task rather than requiring human intervention.

3. Unethical/unfair/anti-competitive practices AIM: self-explanatory.

Centralized Software Simulation Laboratory (CSSL)

This laboratory will receive the test cases from the CSP for testing an automobile's ADM. It will generate the requisite input data, simulate each test case and register the desired test case result data. Thereafter, the ADM of the vehicle under test will be plugged in, replacing the simulator, and fed the same test inputs. The outputs of the ADM will be compared with the desired result data obtained from the simulator. Bharat NCAP will work hand in hand with the CSSL to provide ADM certification for the automobile sector.
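The comparison step described above can be sketched as a minimal test harness. All names here (TestCase, run_certification, the braking-distance fields and tolerances) are illustrative placeholders invented for this sketch, not an existing API or calibrated values:

```python
# Minimal sketch of the CSSL comparison step: register the reference
# result for each CSP test case, obtain the corresponding output of the
# ADM under test, and compare the two within a permitted tolerance.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    inputs: dict          # simulated environment and traffic data
    tolerance: float      # permitted deviation from the reference output

def reference_output(case: TestCase) -> float:
    """Desired result registered by the CSSL simulator (stubbed here)."""
    return case.inputs["expected_braking_distance_m"]

def adm_output(case: TestCase) -> float:
    """Output of the ADM under test (stubbed here)."""
    return case.inputs["adm_braking_distance_m"]

def run_certification(cases: list[TestCase]) -> dict[str, bool]:
    """Return a pass/fail verdict per test case."""
    results = {}
    for case in cases:
        deviation = abs(adm_output(case) - reference_output(case))
        results[case.case_id] = deviation <= case.tolerance
    return results

cases = [
    TestCase("evasive-01", {"expected_braking_distance_m": 30.0,
                            "adm_braking_distance_m": 30.4}, tolerance=0.5),
    TestCase("evasive-02", {"expected_braking_distance_m": 25.0,
                            "adm_braking_distance_m": 27.0}, tolerance=0.5),
]
print(run_certification(cases))  # evasive-01 passes, evasive-02 fails
```

In a real deployment the two stub functions would be replaced by the simulator interface and the vehicle's plugged-in ADM respectively; only the comparison logic is the point of the sketch.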

Software Certification Agency (SCA)

This agency will be responsible for software/AI testing certification along industry verticals.

An example of an SCA is the Bharat New Car Assessment Program (Bharat NCAP), India's vehicle testing program for providing certification to automobile manufacturers. There will be other industry verticals for telecom, hospitals, etc. SCAs should be governed and audited along the lines of other quality testing laboratories in areas such as pharmaceuticals and chemicals.

Illustrative Certification Process


Referring to Fig 2: while testing the ADM AIM in automotive AI testing, the qualitative and quantitative data requirements for a road driving simulator able to simulate the various ADM test cases must be approved by the CSP. The CSSL will receive the test policy and test cases from the CSP and execute them on the vehicle testing simulator. The simulated data about the test environment and vehicle traffic should form the inputs to the ADM AIM of the vehicle under test, to determine its ability to take adequate and timely evasive action in each test case. Boundary conditions, the probability of failure of the ADM and the resulting gravity of injury or loss of life must be recorded. Roadworthiness clearance should be given only after all testing, including AIM testing, is successfully carried out.
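The probability of failure near a boundary condition, which the process above requires to be recorded, could be approximated by re-running a test case many times with small random perturbations of its inputs. The sketch below stubs the ADM as a simple constant-deceleration braking model; every number and name in it is an assumption chosen for illustration, not a real ADM or a calibrated standard:

```python
# Illustrative sketch: estimating an ADM's probability of failure for one
# test case by re-running it with randomly perturbed inputs near the
# boundary condition. The ADM is stubbed as a simple braking model.
import random

def adm_stops_in_time(obstacle_distance_m: float, speed_mps: float) -> bool:
    """Stub ADM: brakes at 6 m/s^2; fails if the stopping distance exceeds the gap."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)
    return stopping_distance <= obstacle_distance_m

def estimate_failure_probability(base_distance_m: float, base_speed_mps: float,
                                 jitter: float = 0.1, trials: int = 10_000,
                                 seed: int = 42) -> float:
    """Fraction of perturbed runs in which the ADM fails to stop in time."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        d = base_distance_m * (1 + rng.uniform(-jitter, jitter))
        v = base_speed_mps * (1 + rng.uniform(-jitter, jitter))
        if not adm_stops_in_time(d, v):
            failures += 1
    return failures / trials

# Near the boundary: at 20 m/s the stub's stopping distance is about 33.3 m,
# so a 34 m gap passes nominally but fails under some perturbations.
p_fail = estimate_failure_probability(base_distance_m=34.0, base_speed_mps=20.0)
print(f"Estimated probability of failure: {p_fail:.2%}")
```

The same Monte Carlo framing also yields the boundary condition itself: the input region where the estimated failure probability transitions from near zero to near one.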

Main areas of concern requiring testing in AD

Draft categorized functional areas/test suites for Autonomous Driving Modules, identified by the author, are given below:

Test Case Category: Life Threatening

1. Situational awareness: the ability to understand the location of the vehicle with respect to its environment of roads, dividers, static obstacles, etc.

2. Understanding vectorized dynamic data on traffic and other beings on the road.

3. Compliance with the rules of the road (the traffic rules prevalent in the state).

4. Taking safe evasive action.

5. Normal steering: the ability to steer normally in the environment.

6. Steering in a degraded environment of fog, rain, ice and the like.

Test Case Category: Human Decision Threshold and Control (HDTC)

1. Measuring collision probability in real time and calibrating the collision threshold correctly for minimum oscillations.

2. Marking the decision point for evasive action.

3. Situations and thresholds for activating human override of ADM inputs.

4. Stipulations for when the human occupant is asleep.

Test Case Category: Unethical, Unfair and Anti-Competitive

1. Auto vendor claims versus actual test results.
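Calibrating the collision threshold "for minimum oscillations", listed under HDTC above, is essentially a hysteresis problem: a single cutoff on a noisy collision-probability signal makes the alarm chatter on and off, while a raise/clear pair of thresholds keeps it stable. A minimal sketch, with the signal and all threshold values chosen purely for illustration:

```python
# Sketch of hysteresis thresholding for an HDTC alarm. A single cutoff on a
# noisy collision-probability signal toggles the alarm on every small
# fluctuation; a raise/clear pair of thresholds suppresses that chatter.
# The signal and thresholds are illustrative, not calibrated values.

def alarm_transitions(signal, raise_at, clear_at=None):
    """Count on/off transitions of the alarm over a probability signal."""
    if clear_at is None:
        clear_at = raise_at  # single-threshold behaviour
    alarm, transitions = False, 0
    for p in signal:
        if not alarm and p >= raise_at:
            alarm, transitions = True, transitions + 1
        elif alarm and p < clear_at:
            alarm, transitions = False, transitions + 1
    return transitions

# Noisy collision-probability signal hovering around 0.5, with one real spike.
signal = [0.48, 0.52, 0.49, 0.9, 0.53, 0.47, 0.51, 0.46]

single = alarm_transitions(signal, raise_at=0.5)                      # chatters
hysteresis = alarm_transitions(signal, raise_at=0.55, clear_at=0.45)  # stable
print(single, hysteresis)  # 6 1
```

The single threshold toggles six times on noise alone, while the hysteresis pair fires once, on the genuine spike, and stays latched; choosing the width of the raise/clear band is exactly the calibration task the test suite would have to verify.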

Conclusion

India is an emerging superpower, and we need to shift increasingly to strategic planning in everything we do; strategic planning and execution deliver unmatched capabilities. Let us therefore be one of the first countries to adopt formal AI testing laboratories and regulations with a strategic perspective. It may be appropriate to form a separate ministry for software (AI plus other areas needing regulation), since governing AI is the next big challenge for governments. Alternatively, the function could sit under the Ministry of Electronics and Information Technology (MeitY).

Staying ahead in AI includes the ability to safeguard against fraudulent AIMs, which will soon emerge as a sunrise industry for hackers and unnamed state actors. Cyber warfare is taking the world by storm and is resorted to by many nations in cloak-and-dagger mode. Protection against weaponized AIMs, as well as against the ever-present dangers of half-baked AIMs, will emerge from India's focus on software testing laboratories.

Beginning with a real life-threatening problem, such as that posed by the ADM, will provide deep insights into how to regulate in the right spirit while also inspiring the national growth of AI for the good of human beings.

 

Notes:

1. https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence

2. https://www.justice.gov/opa/pr/boeing-charged-737-max-fraud-conspiracy-and-agrees-pay-over-25-billion

3. https://m.economictimes.com/news/international/business/volkswagen-demands-billion-euro-dieselgate-payout-from-ex-ceo-report/articleshow/82220700.cms

4. https://impakter.com/tesla-autopilot-crashes-with-at-least-a-dozen-dead-whos-fault-man-or-machine/


Copyright © Commander P Rajagopal Tampi IN (R) 2023

The views expressed are solely that of the author made with the best intentions. There may be other views existing on the topic and the author's intention is not to create any conflicts. Recommendations if any are inputs for concerned agencies for their consideration. Agencies are in no way being criticized or compelled to adopt any of the recommendations.



