AI Bias, Corporate Responsibility & The Need For Regulation
Innovation, progress and equitability are not mutually exclusive. But achieving all three is only going to be possible with meaningful regulation.

In November 2016, AWS released Rekognition, software that allowed individuals to be identified in live video feeds.
It could be used by restaurateurs to identify famous diners. It could be used by charities to identify individuals being trafficked. Likewise, it could also be used by law enforcement to identify potential criminals in a crowd.
Unfortunately, it was not particularly good at identifying non-white faces, which, when used by law enforcement, would in all likelihood lead to higher rates of wrongful identification and arrest among marginalised groups.
AWS Takes Action Where Governments Did Not
Amazon and AWS are not exactly known as paragons of equity, but amid concerns about how this technology could be misused, and no doubt slightly fearful of the backlash if they were held responsible for a system seen to oppress, they decided to take action.
In June 2020, AWS issued a one-year moratorium on police use of the software, giving governments a chance to legislate to ensure it would be used safely.
Given the continued inaction of governments, in 2021 AWS decided to leave the moratorium in place indefinitely.
As of 2024, the moratorium remained in place.
How this bias arose
In computing, there is an old saying:
garbage in, garbage out
In the case of Rekognition, the model was likely trained on poor data.
That said, even with better training data, other factors could play a part: differences in deployment environments, the video equipment used, facial variation across populations, and model-optimisation trade-offs between performance and accuracy.
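One such trade-off can be made concrete. A face-matching system typically declares a match when a similarity score crosses a threshold, and a single global threshold can yield very different false-match rates across groups if the model's score distributions differ between them. The sketch below is purely illustrative: the group names and score distributions are invented, and it does not reflect Rekognition's actual internals or numbers.

```python
import random

random.seed(0)

# Hypothetical similarity scores for NON-matching face pairs ("impostor"
# pairs). We assume, for illustration only, that the model produces
# systematically higher impostor scores for one group, as can happen when
# that group is under-represented in training data.
def impostor_scores(mean, n=10_000):
    return [min(1.0, max(0.0, random.gauss(mean, 0.1))) for _ in range(n)]

scores = {
    "group_a": impostor_scores(0.30),
    "group_b": impostor_scores(0.45),
}

THRESHOLD = 0.6  # one global match threshold, as many deployments use

for group, s in scores.items():
    false_matches = sum(score >= THRESHOLD for score in s)
    print(f"{group}: false-match rate {false_matches / len(s):.2%}")
```

With these invented distributions, group_b's false-match rate comes out far higher than group_a's at the same threshold, even though the operator made no group-specific choices at all: the disparity is baked into the model's scores.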
Current governmental inaction
While big tech companies do have teams actively researching bias mitigation, in 2024 we still saw products released that were full of bias.
While the manufacturers have taken some steps towards ethical AI deployment, the British and American governments seem to be prioritising economic growth over addressing ethical concerns. In 2025, the Trump administration has repeatedly warned about the dangers of over-regulation of AI, and the UK has refused to sign an international AI agreement over concerns about stifling economic growth.
When a corporation such as Amazon, with its somewhat complex history regarding social responsibility and equity, takes a stronger ethical stance than elected governments, we should all be deeply concerned.
A Path Forward: Regulation and Innovation
Innovation, progress and equitability are not mutually exclusive. But achieving all three is only going to be possible with meaningful regulation. As tech companies have already demonstrated, they will not prioritise ethical concerns unless pushed to do so.
In the UK, manufacturers that release software that leaks sensitive personal data, accidentally or otherwise, face substantial fines from the Information Commissioner's Office (ICO). While the ICO has its limitations, it does act as a deterrent and a protection for individuals in the UK.
Although bias may be more subjective than data leaks, regulation is still possible.
At a minimum, manufacturers should test for bias and publicly report the results when releasing these products.
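To give a sense of what such testing could look like, here is a minimal sketch of a per-group bias audit, assuming the manufacturer holds labelled evaluation data. The group names, record counts, and metric choice (false positive rate, the kind of error that drives wrongful identification) are all invented for illustration.

```python
from collections import Counter

# Hypothetical evaluation records: (demographic_group, is_true_match, predicted_match).
# The counts are invented purely to illustrate the calculation.
records = (
    [("group_a", False, False)] * 970 + [("group_a", False, True)] * 30 +
    [("group_b", False, False)] * 900 + [("group_b", False, True)] * 100
)

def false_positive_rates(records):
    """Per-group rate of predicting a match on pairs that do not match."""
    fp, neg = Counter(), Counter()
    for group, actual, predicted in records:
        if not actual:          # only non-matching pairs count towards FPR
            neg[group] += 1
            fp[group] += predicted
    return {g: fp[g] / neg[g] for g in neg}

rates = false_positive_rates(records)
worst, best = max(rates.values()), min(rates.values())
print(rates)                                  # {'group_a': 0.03, 'group_b': 0.1}
print(f"disparity ratio: {worst / best:.1f}x")  # disparity ratio: 3.3x
```

A published report built from numbers like these, per group and per metric, would let purchasers compare products and would make "we did not know" a much harder defence.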
Such reports could serve the same purpose as the gory pictures on cigarette packets: at the very least, the consumers who deploy the technology could not argue that they did not know the dangers. They might even make more informed choices.
Enforcing the presence of this reporting could be the role of a government department. Such a department could also handle reports of undocumented bias in AI models and issue fines for clear negligence.
Governments exist to protect their people, and must evolve in a timely manner to do so.
The argument for inaction
Critics of AI regulation argue that it will stifle innovation. However you measure innovation, though, it is very difficult to call products that discriminate against marginalised groups 'progress'. Instead, they entrench the problems of the past and, in the case of AI, do so at exponential scale.
Some argue that market forces will naturally push companies to improve the fairness of AI models, but the prevailing winds in the countries leading AI development make this seem increasingly unlikely. In America, many companies are rolling back DEI initiatives, and the US government is gutting the departments that support the very groups AI disproportionately discriminates against.
Left unregulated, AI will not become fairer. It will simply discriminate more efficiently.
Looking forward
The future of AI must balance innovation and equitability. Currently, the scales seem heavily weighted towards unchecked progress, at the expense of fairness and accountability.
Without guardrails, we're entrenching discrimination into powerful, data-driven systems that shape our daily lives. Regulation need not stifle innovation; it can guide it.