Should AI developers be held personally liable for the biases present in their models?
Esranur Kaygin
7 replies
My opinion: Blame the code or the coder? Holding devs personally liable might sound flashy, but it ignores the bigger picture. We must address the data, the algorithms, and the societal factors feeding the bias monster. Let's fix the system, not just punish the programmers.
Replies
Shreya Gupta @shreya_gupta02
No, but implementing strict guidelines and a solid framework can significantly reduce bias and errors in development, leading to fairer outcomes.
As an avid tech enthusiast and data science aficionado, I believe the question of holding AI developers personally liable for the biases in their models is a complex one. Instead of pointing fingers solely at the individuals crafting the code, it's crucial to step back and examine the broader ecosystem at play. Let's not just blame the coder; let's address the root causes: biased data, flawed algorithms, and the societal influences that perpetuate these issues. By fixing the system as a whole, we can create a fairer and less biased AI landscape. Let's shift the narrative from punishment to progress.
@thestarkster Great answer. Sometimes I feel like humans are just biased, and we're trying to create a bias-free world (an AI world) while the data we feed it to build this world is itself biased. In a way, it's like fruit of the poisonous tree.
As a company, you are definitely responsible for how you use AI models. You cannot blame the model for unfair decisions or other harm done by your application.
Bias and inaccuracies are part of the tech at this moment, so you have to deal with them.
I wrote a LinkedIn post about accountability recently, following the Air Canada ruling: https://www.linkedin.com/posts/j....
Besides, it's impossible to create a model that works well and is unbiased for all use cases, so don't expect OpenAI to fix everything for everyone.
@jopie Gonna copy-paste my previous comment here too: Just as plane manufacturers aren't directly responsible for a crash, AI developers aren't solely to blame for AI misuse. However, both industries must prioritize safety measures, like internal and external audits, and regulations. This ensures responsible practices and builds trust in their technologies.
@esranur_kaygin I agree. I think I misunderstood what you meant by AI developers at first. But both the model providers (OpenAI, Google) and the companies building applications with these models have responsibilities.
Model providers should communicate clearly about the limitations and risks of using their tech, and preferably also provide guidelines and no-gos. I think they are dropping the ball on this right now, as all you hear is how these models are capable of amazing things.
Companies incorporating these models into their software have a responsibility for how they are used. This was what I meant here originally.
In both cases, it's not the programmer who is responsible, imo, but higher management. They should be aware of the risks, evaluate thoroughly (see the sketch below), and decide what can be safely deployed.
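To make "evaluate thoroughly" a bit more concrete, here's a minimal sketch of one common pre-deployment check, demographic parity: compare the model's positive-decision rate across groups. This is my own illustration, not something from the thread; the helper name, the group labels, and the 10% threshold are all assumptions, not an established standard.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rate between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is assumed to be 0 or 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # hypothetical model decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels
gap, rates = demographic_parity_gap(preds, groups)
print(rates)               # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap:.2f}")  # gap = 0.50
if gap > 0.10:             # the threshold is a policy choice, not a given
    print("Bias check failed: review before deployment.")
```

A single metric like this never proves a model is fair, but failing it is a cheap, early signal that management can act on before anything ships.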