The scope for applying AI in practically every field imaginable is waking people up to the impact it can have when used properly. Even though it has the capacity to create a world with a significantly reduced human footprint, several areas of ambiguity remain. If these are not addressed with caution, we may soon be looking at a digital disaster.
Development of legal frameworks
As AI is still an emerging field, the law has to evolve continually to keep pace with it. At present, the laws governing AI are far from comprehensive, which could limit its proper implementation.
There are several regulatory problems concerning AI. For instance, which entity should be held liable when an autonomous machine causes harm: the programmer or the manufacturer of the machine? Or consider the hypothesised problem of loss of control over an intelligent machine, where, to put it crudely, it eludes human control and starts functioning as an intractable entity. While we may not be dealing with such realities today, they may not be far off, and our legal frameworks cannot be left behind.
Since a machine lacks self-awareness, it cannot form the intent to commit a wrongful act, so we must work out how to establish intent in such situations. Nor can we simply hold the programmer of an AI liable, because an AI learns its behaviour from data and is not directed by a piece of code alone.
It is imperative to set up a regulatory body that can provide remedial measures when an artificial intelligence malfunctions. Companies involved in the development of AI must be brought within its ambit, and such organisations will need proper incentives to fall in line. One possible approach is to hold unregistered organisations strictly liable for accidents caused by their AI, while registered organisations enjoy fault-based liability.
Instances of racial bias
The more data an AI's algorithm is fed, the more accurate its performance becomes. However, if that data carries underlying biases, they are bound to be reflected in the AI's behaviour.
The most famous example of this is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), software used by the American judiciary to predict how likely a convicted criminal is to reoffend. The algorithm soon ran into trouble when its predictions were found to be biased against Black individuals.
When applying such algorithms in crucial fields such as law, there is simply no room for error. That said, it is undeniable that every person carries underlying biases, whether conscious or unconscious; even with a deliberate effort to be unbiased, the nature of human cognition makes complete neutrality impossible. And since humans produce and label the data these systems learn from, those biases inevitably seep in.
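This feedback from data into behaviour is easy to reproduce in miniature. The sketch below is purely illustrative (the group names and label counts are invented, not drawn from any real dataset): a trivial "majority label per group" model faithfully absorbs whatever sampling bias its training records contain.

```python
from collections import Counter

# Hypothetical "historical" records: (group, reoffended) pairs.
# The labels carry a sampling bias: group "B" appears with far more
# positive labels, e.g. because it was policed more heavily.
records = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 40 + [("B", 1)] * 60

def train(data):
    """A naive model: predict the majority label seen for each group."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(records)
print(model)  # {'A': 0, 'B': 1} -- the bias in the data becomes the model's rule
```

No real-world model is this crude, but the principle scales: a learner optimised to reproduce its training labels will reproduce the skew in those labels too.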
Ethical collection of data
As we have previously established, data is the driving factor in AI. Of late, there has been much debate about the unethical ways in which big tech harvests data, and countries have responded by enacting data protection laws to restrict data flows.
AI needs vast amounts of data to function efficiently, and figuring out an ethical way to collect it is a pressing matter. “The ethical way to collect data is to do it in a way that actually improves the customer experience and to explain to them why you are collecting this information. The next step beyond that is to make it easy for your customers to opt out of data collection if they want to. Don't make it so complicated. Give control back to the customer,” says Anastasia Dedyukhina, the founder of Consciously Digital, a digital wellness firm. That is easy to say, but hard to implement.
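One concrete way to "give control back to the customer" is to make consent a gate at the very start of the data pipeline. A minimal sketch (hypothetical record fields and user names, chosen for illustration): events from users who opted out, or for whom no consent record exists, never enter the pipeline at all.

```python
# Per-user consent register; bob has opted out of data collection.
consent = {"alice": True, "bob": False}

events = [
    {"user": "alice", "page": "/home"},
    {"user": "bob", "page": "/account"},
]

def collectable(events, consent):
    # Default to excluded: users with no consent record are dropped,
    # so absence of consent is never treated as consent.
    return [e for e in events if consent.get(e["user"], False)]

print(collectable(events, consent))  # only alice's event survives
```

Filtering at ingestion, rather than after storage, is what makes the opt-out meaningful: data that was never collected cannot later leak or be repurposed.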
When can the objective of an autonomous machine override the requirement for individual consent? Though we may not realise it, the same question applies to our present-day use of CCTV cameras: no individual actively consents to being monitored, yet the surveillance is accepted as necessary to maintain social order and prevent societal disintegration. With AI, the consequences are magnified, making an unambiguous answer imperative.
Limitations in computational power
AI requires a massive amount of computational power that is not readily available. Much like the mining of Bitcoin, the application of artificial intelligence will, for the time being, be hindered by limits on computational capacity.
Consequently, integrating AI systems into businesses becomes a hassle, especially for startups. The infrastructure required to build AI that is genuinely useful in day-to-day life will be a stumbling block for many private players with limited funds and computing power. This is why governments hold more power in the implementation of AI; to maximise efficiency, however, it would be better to strike up collaborative efforts between the public and the private sector.
Security of data
Setting up robust security systems to safeguard data is necessary before any large-scale implementation of AI. Security breaches in fields such as medicine can prove fatal, handing immense leverage to malicious actors. Training an artificial neural network often involves sensitive information such as fingerprints, making its security critical. India's data protection laws must be able to keep up with these developments.

Artificial intelligence at this point in time is clay, waiting to be moulded by human civilisation. Whether we mould it with greed or caution will ultimately decide the fate of our species.