
The following is an excerpt from an article published on Built In, featuring quotes from Orion’s Chief Technology Officer, Rajul Rana.

What Are the Challenges of Foundation Models?

Foundation models serve as a solid starting point in AI development, but they are not without flaws. Because a foundation model acts as a single point of failure, any errors, vulnerabilities or biases within it can propagate to all of the AI products built on top of it, amplifying the risks.

Lack of Interpretability

The inner workings and decision-making processes of foundation models are often not well understood — even to the people actually making them — which makes it hard to determine how and why they arrive at certain conclusions.

“These are little black boxes,” Rajul Rana, chief technology officer at IT services company Orion Innovation, told Built In. “We know roughly how they work but [we] don’t know exactly why they generate certain outputs.”

This lack of interpretability can make it difficult to trust a foundation model's outputs or correct its errors, which can have serious consequences, especially since these models are embedded in our everyday lives, from the facial recognition software used to unlock phones to the hiring algorithms companies use to screen job candidates.


Read the full article at builtin.com
