The following is an excerpt from an article published on Built In, featuring quotes from Orion’s Chief Technology Officer, Rajul Rana.
What Are the Challenges of Foundation Models?
Foundation models serve as a solid starting point in AI development, but they are not without flaws. Because a foundation model acts as a single point of failure, any errors, vulnerabilities or biases within it can spread to all of the AI products built on top of it, amplifying the risks.
Lack of Interpretability
The inner workings and decision-making processes of foundation models are often not well understood, even by the people who build them, which makes it hard to determine how and why they arrive at certain conclusions.
“These are little black boxes,” Rajul Rana, chief technology officer at IT services company Orion Innovation, told Built In. “We know roughly how they work but [we] don’t know exactly why they generate certain outputs.”
This lack of interpretability can make it difficult to trust foundation models' outputs or to correct errors, which can have massive consequences, especially since these models are embedded in our everyday lives, from the facial recognition software used to unlock phones to the hiring algorithms companies use to screen job candidates.
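To make the black-box point concrete, here is a minimal sketch, not from the article, that assumes the Hugging Face transformers library and its default pretrained sentiment model are available. The model returns a label and a confidence score, but nothing in the output explains which parts of the input drove the decision.

```python
# Minimal sketch (assumes the "transformers" library and its default
# pretrained sentiment model are installed and downloadable).
# It illustrates the black-box point: the model reports *what* it decided
# and how confident it is, but not *why*.
from transformers import pipeline

# Load a general-purpose, pretrained sentiment classifier, i.e. an
# application built on top of a model someone else trained.
classifier = pipeline("sentiment-analysis")

result = classifier("The candidate has ten years of relevant experience.")
print(result)
# Example output: [{'label': 'POSITIVE', 'score': 0.99...}]
# The score quantifies confidence, but the output carries no explanation
# of which words or internal features produced the decision.
```

Post-hoc interpretability tools can estimate which inputs mattered most, but they approximate the model's behavior from the outside rather than revealing its actual reasoning.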